To change the world, change the stories humans believe.

The kingdom of matter stores its treasures on many levels. Until recently, we thought there was only one. We had no idea there were others.

When we strike a match, a chemical reaction liberates energy stored in the molecules. Old chemical bonds break and new ones are forged. The adjacent molecules begin to move faster and the temperature increases. Soon, the process becomes self-propagating, a kind of chain reaction.

The energy represented by a flame has been locked, perhaps for many years, in chemical bonds between atoms, mediated by the electrons that orbit their nuclei. When we make a fire, we release this hidden chemical energy.

But there is a deeper level of matter that houses another kind of energy. Inside the heart of the atom, its nucleus. This hidden treasure was forged billions of years ago in distant stellar furnaces. Long before Earth was formed. It’s what powers the stars. Wresting this knowledge from nature is a cosmic rite of passage. The beings of any possible world clever enough to travel this deep into nature’s labyrinth better take care. The secret of starlight is nothing to fool with. Like fire, it can bring a civilization to life and it can burn it to the ground.

What is an atom?

What are atoms made of? How are they joined together? How could something as small as an atom contain so much power? Where do atoms come from?

The same place we do. When we seek the origin of atoms, we are searching for our beginnings. This quest takes us to the depths of space and time.

Long ago, before there was an Earth, there was a wisp of cold thin gas. The gas was made of the simplest atoms, and they were gravitationally attracted to one another. So, the cloud grew. The atoms contained small but heavy particles in their nuclei. Each hydrogen nucleus was a single proton; the helium nuclei held neutrons as well. Both wore a skittering veil of electrons in orbit around them. The atoms in the interior of the cloud moved ever faster as gravity pulled them ever closer together, until the whole thing collapsed in on itself. This collapse raised the temperature so high that the cloud became a natural fusion reactor.

In other words, a star.

Atoms operating according to the laws of physics met and fused in the unbroken darkness.

And then there was light.

In this froth of elementary particles, the nucleus of one of the atoms, a helium atom, was formed.

After billions of years, the star is now elderly. Having converted all of its available hydrogen fuel to helium. Now that it’s time for the star to die, it resumes the turning inward of its infancy. Our helium atom joined with two others to become one of our heroes, a carbon atom. That’s what happens in the hearts of stars. Soon, our carbon atom will tumble out of this red giant star into the interstellar ocean of space.

Meanwhile, in another part of the galaxy, similar processes were unfolding as stars were born and died. The other atom of our tale was formed in the heart of one such dying star. In the catastrophic process of going supernova, 226 protons and neutrons were fused onto a carbon nucleus, turning it into a uranium atom.

As chance would have it, after wandering the vast Milky Way galaxy, our two atoms both happened on the fiery birth of a small solar system.

Ours.

Our carbon atom has travelled far to become part of a small planet. After billions of years, it joined an extremely complex molecule, which has the peculiar property of making virtually identical copies of itself. The carbon atom plays its tiny role in the origin of life. Through all its incarnations, our carbon atom has had no self-awareness. No free will. It is merely an extremely minor cog in some vast cosmic machinery, working in accord with the laws of nature.

And that other atom? The uranium atom made in the supernova? What has become of it?

Our world was born in fire. And this tiny atom was drawn to it. Maybe it rode the explosive wave of a supernova. Or perhaps, it was attracted by the gravity of our sun and pulled down deeper and deeper into the interior, which was even more of a hell.

The Earth’s surface soon cooled, but the interior remained molten. The magma slowly circulated, and our uranium atom found itself carried, over the ages, from the deep interior back up to the surface. Despite the high temperatures and pressures deep within the Earth, our atom’s integrity was never threatened. Atoms are small, old, hard and durable.

Everything is made of atoms, including us

Until the last years of the 19th century, we didn’t know about the frenzied activity inside the atom. And this is where our two atoms from opposite ends of the Milky Way galaxy finally met. It happened in Paris.

Our carbon atom became part of the retina of one of the world’s greatest scientists. This was just a few years after the discovery of X-rays. 

Marie Curie and her husband and research partner, Pierre, wanted to know how a piece of matter could make it possible to see through skin and even walls. The knowledge that there were rare places in the world where rocks rich in uranium possessed these strange properties inspired Marie on her scientific quest.

The dull brown ore, still mixed with pine needles, came from the part of Eastern Europe that is now the Czech Republic. But this material was very rare. And even to distil a tiny amount of it required the most lengthy and labour-intensive efforts.

We lived in our single occupation, as in a dream.

Marie Curie

They worked under the worst possible conditions to process the ore, a mineral called pitchblende, which was 50 to 80% uranium. This was quite an achievement, but Marie and Pierre were hunting for something far rarer. It took them three years of processing tons of ore to isolate a mere tenth of a gram of a substance she named radium.

Marie and Pierre had discovered a completely new element.

The Curies showed that the radium was entirely unaffected by extreme temperatures. That was strange. Most things subjected to such intense heat would change drastically. And there was something else: it spontaneously emitted energy, not through chemical reactions, but through some unknown mechanism. Marie Curie called this new phenomenon “radioactivity”. She and Pierre calculated that the energy spontaneously flowing from a lump of radium would be much greater than that released by burning the same amount of coal. Radioactivity, to their astonishment, was millions of times more potent than chemical energy – the difference between liberating the energy that resides in molecules and the far greater power stored deeper down.

Between Marie, Pierre, little Irene and the man she would later marry, the family would win five Nobel prizes in science.

The bottles, tubes and flasks of pitchblende that they had refined, left a residue of radium particles. They were so potent, that they lit up the lab at night. As Marie wrote years later, “They were like Earthly stars, these glowing tubes in that poor rough shack.” Marie leapt to the correct conclusion that the luminescence was due to something happening inside the nuclei of radioactive atoms.

A World Set Free

For thousands of years, it had been thought that atoms were the smallest unit of matter. Curie’s earthly stars were evidence that within the atom was a possible world where even smaller particles were interacting. A hundred years after this magical night, Marie Curie’s cookbooks still glowed with the exquisite radioactivity she had discovered.

But it took a little time for the darker implications of this deeper understanding of nature to dawn in the mind of a visionary named H.G. Wells.

A writer, H. G. Wells was a genius at turning the new revelations of science into stories that captivated the world, and at foreseeing, as no one else did, their gravest consequences. Wells, who first imagined time machines and alien invasions, had a nightmare of a future world where atoms were weaponized. In his 1913 novel The World Set Free, he coined the phrase “atomic bombs” and loosed them on helpless civilian populations. He set his vision of a nuclear war between England and Germany in the impossibly distant future of the 1950s.

In 1933, the Hungarian physicist Leo Szilard was contemplating becoming a biologist. He read Wells’ novel and it started him thinking. Szilard knew that atoms are made of protons and neutrons on the inside and a skittering veil of electrons on the outside. Suddenly, waiting for a traffic light to change at an intersection in London, he was struck by the thought that if he could find a sufficiently large amount of an element that would emit two neutrons when it absorbed one, it could sustain a nuclear chain reaction. Two would produce four, four would produce eight, and so forth, until enormous amounts of the energy in the nucleus itself could be liberated. Not a chemical reaction, but a nuclear one.
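Szilard's insight is, at bottom, the arithmetic of doubling. A toy sketch (mine, not anything from the historical record) makes the exponential growth he feared concrete:

```python
def neutrons_after(generations: int, start: int = 1) -> int:
    """Free neutrons after n doubling generations: start * 2**n.

    Toy model of Szilard's idea: every neutron absorbed by a nucleus
    causes two more to be emitted, so the population doubles each
    generation. Real reactors and bombs are far messier (losses,
    geometry, enrichment), but the exponential shape is the point.
    """
    count = start
    for _ in range(generations):
        count *= 2  # one neutron in, two neutrons out
    return count

print(neutrons_after(10))  # 1024 -- after just ten generations
print(neutrons_after(80))  # already more than 10**24 neutrons
```

Since each fission generation in a bomb core takes well under a microsecond, the jump from a single neutron to astronomical numbers happens faster than any human reaction time, which is what made the thought so chilling.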

This was the moment our world changed. Leo Szilard also knew the power of exponentials and if a neutron chain reaction could be triggered down there in the world of the atom’s nucleus, then something like Wells’ imaginary atomic bomb might be possible. He shuddered at the thought of this destructive capability.

But this was just the latest development on a continuum of violence that began long long before.

War, a History

50,000 years ago, all humans were roving bands of hunter-gatherers. They communicated over limited areas by calling to one another. That is, at the speed of sound. Around 1,235 kilometres per hour. But over longer distances, they could communicate only as fast as they could run.

Around 12,000 years ago, about the same time as the invention of agriculture, they developed the power to kill at a longer distance. The kill radius expanded to the arc of an arrow launched by a bow. And they could kill one person with a single arrow. Our ancestors were not particularly warlike because there were so few people and so much room back then that moving on was preferable to armed conflict. Their weapons were used almost entirely for hunting. Their identification horizon was likely small. Only with the other members of their band of 50 or 100 people. But their time horizon took a giant leap. They worked long and hard planting crops in the here and now so that several months later, they could harvest them. They postponed present gratification for later advantage. They began to plan for the future.

By about 2,500 years ago, there was a new kind of war. The conquered territories of Alexander stretched from Macedonia to the Indus Valley. There were now many on planet Earth who owed allegiance to groups composed of millions. Over long distances, the maximum speed of both communication and transportation was the speed of the sail and the horse. Archidamus III, King of Sparta, was famed for his unflinching courage. He relished taking part in hand-to-hand combat with the enemy. It is said that when he first saw a projectile hurled by a ballista, he cried out in anguish, “Oh, Hercules! The valour of man is lost!” Both the kill range and the kill ratio had increased exponentially. Now, ten corpses lay where one would have been. And the soldier who released the lever on the siege engine never even saw their faces. He remained far removed from the carnage on the other side of the city wall.

Today, the maximum speed of transportation is the escape velocity from Earth. 40,000 kilometres per hour. The speed of communication is the speed of light. The identification horizons have also expanded enormously. For some, it’s a billion or more. For others, it’s our whole species. And for a few, it’s all living things. The kill radius, in the worst-case scenario, is now our global civilization.

How did we get here?

Well, it was the result of a deadly embrace between science and state. And there was one scientist for whom no amount of destructive power was enough.

It’s hard to pinpoint the precise moment when the first nuclear war began. Some might trace it back to that arrow sailing over the treetops. Others might say it started much later, with three messages.

In 1939, on Adolf Hitler’s birthday, one of his brightest young scientists, Paul Harteck, had a special gift in mind for his Führer. Harteck wrote a letter to the Nazi war office to inform them that the latest developments in nuclear physics would make it possible to produce an explosive exponentially more powerful than conventional weapons. He was trying to give an atomic bomb to Adolf Hitler. But Hitler would never get his hands on a nuclear weapon; he had murdered, imprisoned or exiled many of the great physicists in his territories: those who happened to be Jews or liberals, and many who were both.

Exactly a month before the war began, Leo Szilard made a pilgrimage to the house Albert Einstein was renting on Long Island in New York. The physicist who usually chauffeured Leo Szilard on trips out of Manhattan was unavailable that August day in 1939. So, Szilard enlisted the services of a fellow Hungarian émigré, a young scientist named Edward Teller. Persecution in Budapest had sent Teller and his family to take refuge in Munich, where he lost his right foot in a traffic accident. In the early 1930s, Teller and his family were forced to flee once again.

Just as Harteck felt it his duty to inform Hitler, Szilard wanted the US President, Franklin Roosevelt, to know the awesome power of such a weapon. There was no scientist on Earth whose prestige and influence were comparable to Einstein’s. Einstein’s nightmare was imagining Hitler with a nuclear weapon at his disposal. But what would be the long-term consequences of this dangerous new knowledge, which, once unleashed, could never be taken back? Einstein would take no role in the U.S. effort to build the atomic bomb, which became known as “The Manhattan Project.” But he did alert Roosevelt to the potential use of atomic nuclei in warfare. After the war was over, he told a reporter that if he had known the Germans would fail in developing an atomic bomb, he never would have signed the letter. But Edward Teller had no such ambivalence. He couldn’t wait to get started on weaponizing the atom.

The Russian physicist G.N. Flyorov had tried for years to alert his leader, Joseph Stalin, to the possible military applications of a nuclear chain reaction. However, the Soviet Union was under siege by the Germans, and an atom bomb project was likely to take years to complete. With their backs against the wall, it seemed too impractical even to think about. In 1942, Flyorov published a scientific paper on nuclear physics and was excited to see what the eminent physicists in Europe and the United States had to say about it. Flyorov was puzzled: none of the physicists of the international scientific community thought his paper worthy of comment.

At first, he was hurt, but then he realized what was really happening. American and German scientific journals were being scrubbed of any nuclear physics papers as both nations secretly worked on building the bomb. It was this absence of published data, the dogs that did not bark, that moved Flyorov to re-double his efforts to convince Stalin to start his own nuclear weapons program.

In all three cases, it was the scientists, not the generals or the arms dealers, who informed their leaders that a huge increase in the kill ratio was possible.

The Manhattan Project

The U.S. Department of War chose the remote location of Los Alamos, New Mexico as the headquarters for the atomic bomb research project. It had been recommended by the project’s director, physicist J. Robert Oppenheimer, who had recuperated there from an illness as a teenager.

But for Edward Teller, an atomic bomb wasn’t big enough. He dreamed of even greater lethality: a weapon in which the atomic bomb was nothing more than a match to light a fuse in the nucleus. A thermonuclear weapon. What Teller affectionately called “the Super”.

If Edward Teller had a polar opposite in the scientific community, it would have been Joseph Rotblat. Rotblat was born in Warsaw to a wealthy family who, like Teller’s, had lost everything. In the summer of 1939, just before the Nazis invaded, he was invited to England to take a research position at the University of Liverpool. At the last minute before his departure, his beloved wife, Tola, had an emergency appendectomy. She was forced to remain behind until she was well enough to travel. Tola insisted that Joseph go on ahead to prepare their new home. It would just be a matter of weeks, she told him.

The challenge to the Manhattan Project team was to find a chemical fuse that would light the nuclear chain reaction, first imagined by Leo Szilard in London. The scientists and engineers told themselves that they would be averting a grave danger by building a bomb of unprecedented destructive power. Their government could be trusted. They would never use such a weapon in an act of aggression, not like those other governments. These atomic scientists were the first to see building nuclear weapons as a deterrent to using them. The fear of Hitler with an atomic bomb was the driving rationale for the Manhattan Project.

And yet, when Germany surrendered and Hitler was no more, of the thousands of scientists who worked on the bomb, only one resigned. It was Joe Rotblat. In the years that followed, whenever he was asked about his decision, he always rejected any suggestion that he had done so out of moral superiority. He would just smile and say, the truth was that he desperately missed his wife, who had been prevented from leaving Warsaw and lost to him in the chaos of the war. With its end in Europe came his chance to go and search for her. But, he never found her. Except as a name on a list of the dead. Tola had perished in the Holocaust, exterminated at the Belzec concentration camp. Although he lived another 60 years, Rotblat never remarried.

Of the three nations that pursued wartime research into building the bomb, only the U.S. succeeded before the war’s end. Historians believe that was because America had taken in so many immigrants: of the leading figures in the Manhattan Project, only two were native-born, and only one got his PhD in the U.S. Atomic bombs were dropped on the Japanese cities of Hiroshima and Nagasaki, ending the Second World War. Two months later, President Truman invited Oppenheimer to the Oval Office to congratulate him. But to Truman’s dismay, Oppenheimer was in no mood to celebrate.

Less than four years later, the Russians exploded their atomic bomb. And shortly after, both nations went on to create thermonuclear hydrogen bombs. The nuclear arms race begun by those three letters from scientists was off to a terrifying start. After the war, Teller’s dreams of greater and greater killing power were to come true. In the early 1950s, when the Communist witch hunts began in the United States, he was perfectly happy to hint that Robert Oppenheimer, his former boss, who had brilliantly run the Manhattan Project, should be stripped of his security clearance, thereby ruining Oppenheimer’s career.

Despite dramatic reductions in nuclear arsenals, the spectre of nuclear war haunts us still. How can we sleep so soundly in the shadow of a smoking volcano?

A Tale of Two Atoms

We’re back on the trail of one of our two atoms, the uranium atom. A uranium atom is inherently unstable. Sooner or later, it decays: a particle breaks away from its nucleus, transforming the uranium into an entirely different element, thorium. Subatomic particles move like bullets through the fine structure of life, shearing electrons from molecules. This is how ionizing radiation affects living things; chromosomes caught in its path never have a chance. This is why atomic weapons are so much more dangerous than conventional ones. Ionizing radiation is all around us and even inside us. At low levels, it poses no threat. But at higher levels, it’s a different story.
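The "sooner or later" of that decay follows a simple exponential law. A small sketch (my illustration, not part of the tale) shows how the surviving fraction of uranium-238 falls off with time, using its roughly 4.47-billion-year half-life:

```python
# Radioactive decay sketch: the fraction of unstable nuclei still
# surviving after time t is N(t)/N0 = (1/2) ** (t / half_life).
# Uranium-238, the atom of our tale, alpha-decays to thorium-234.

U238_HALF_LIFE_YEARS = 4.468e9  # commonly cited value for U-238

def fraction_remaining(years: float,
                       half_life: float = U238_HALF_LIFE_YEARS) -> float:
    """Fraction of the original nuclei that have not yet decayed."""
    return 0.5 ** (years / half_life)

# After exactly one half-life, half the uranium has become thorium
# (and its further decay products):
print(fraction_remaining(4.468e9))  # 0.5
# Over roughly the age of the Earth, about half has decayed:
print(fraction_remaining(4.5e9))
```

This is also why the atom could ride the magma for billions of years intact: with a half-life comparable to the age of the Earth itself, any individual uranium-238 nucleus has about even odds of still being around today.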

In the near term, exposure to high levels of radiation can cause a runaway reaction in a cell that makes it multiply exponentially: cancer. But its power to harm can also echo down the corridors of time. When radiation tears into the chromosomes of the butterfly, it leaves a trail of destruction in its wake that changes the destiny of the butterfly’s unborn offspring: a mutation in its genes. We have a lot in common with butterflies. Any change in the DNA architecture will be copied over and over again in succeeding generations. The damage is passed on, vandalizing our future.

We are made of atoms that were born in stars thousands of light years away in space and billions of years ago in time. The search for our origins has carried us far from our epoch in our world. We are star-stuff, deeply connected with the rest of the universe. The matter we are made of was generated in cosmic fire. And now, we, ambulatory collections of seven billion billion billion atoms intricately assembled over aeons have devised a means to tap that cosmic fire, hidden in the heart of matter.

We cannot unlearn this knowledge. And tragically, insanity runs in our family.

The letters that the scientists wrote to begin this nightmare were followed by another. This one, a letter to the planet, stating that this new understanding of physics demanded a new way of thinking:

Shall we choose death because we cannot forget our quarrels? We appeal as human beings to human beings, remember your humanity and forget the rest.

And what of our other atom? The carbon atom? It’s inside one of you.

I like to think that if I fell towards a black hole, the truth would be revealed at the event horizon and I’d scream into the intercom:

“My gawd, it’s full of nipples!”

And then relativity would dilate time such that my message would take a century to be fully decoded by the rest of humanity. And upon finally getting the last byte of info, they’d glance at each other, smack their foreheads, and say:

“Truly, he was completely cuckoo.”

But they’d be wrong because the black hole would genuinely have been full of nipples: so many that critical mass was exceeded and not only mammals but even light itself couldn’t escape nipple bondage.

Feeling, in mammals at least, is mainly controlled by lower, primitive, and more ancient parts of the brain. And thinking, by the higher, more recently evolved outer layers. A rudimentary ability to think was superimposed on the pre-existing programmed savage behaviours. This is the evolutionary baggage we carry with us into the schoolyard, into the marriage, into the voting booth, into the lynch mob, and onto the battlefield.

So, what does that tell us about our future? Will it be nothing more than a series of callous conquests – dreary repetitions of our past – with no escape for our children?

I know a story that gives me hope. A tale of a man whom I deem the greatest conqueror who ever lived. To date, he remains one of the only, if not the only, powerful leaders in world history who tried to conquer by way of morality. He’s the only person I know of who lived on both extremes of the good–evil spectrum: from bloodthirsty to tranquil. His life’s saga means we can change:

About 2,200 years ago, much of the world was in the grip of absolute rulers. Their armies rampaged across the planet, bringing torture, rape, murder, and mass enslavement wherever they went.

A young man came out of an obscure backwater called Macedonia and, in less than a decade, carved out an empire that stretched from the Adriatic to beyond the Indus River in India. Along the way, Alexander the Great crushed the implacable Persian army.
At about the same time, King Chandragupta conquered all of northern India.

King Chandragupta’s son, Bindusara, assumed the throne after his father’s death. As Bindusara’s own death approached, he intended to bequeath his empire to a favoured heir.

Legend has it that another son, one who had been rejected by Bindusara, was so ruthless in his quest for power that he murdered every one of his 99 half-brothers and in a fiery pit of coal, he burned alive the chosen successor.

Dressed in the finery that only an emperor was entitled to wear, the hated son stood before his dying father and declared contemptuously, “I am your successor now!”

This was Ashoka … and he was just getting started.

In the 3rd century BCE, the Indian emperor Ashoka initiated a reign of terror known for its new heights of sadism and cruelty. When Ashoka’s ministers baulked at his command to cut down all the fruit trees surrounding his palace, Ashoka said, “Fine, we’ll cut off your heads instead.”

His fiendishness knew no bounds.

Ashoka built a magnificent palace for his unsuspecting victims. They did not know until it was too late that deep inside the palace were torture rooms designed to inflict the five most painful ways to die. It came to be known as Ashoka’s Hell.

But that was not Ashoka’s greatest atrocity.

He now set out to complete the conquest of India that his grandfather had begun.

The nation of Kalinga, to the south, knew no peace could be made with such a madman. They courageously stood their ground as Ashoka’s army besieged the city. When they could bear no more, Ashoka sent his troops in for the kill.

As Ashoka surveyed his triumph, one vagabond dared to approach him. “Mighty King, you who are so powerful you can take hundreds of thousands of lives at your whim,” he said, bringing forth a toddler’s corpse from under his robes and presenting it to Ashoka. “Show me how powerful you really are. Give back but one life to this dead child.”

Who was this fearless beggar who dared to confront the vile Ashoka with his crimes? His exact identity is lost to us, but we know that he was a disciple of Buddha, a little-known philosopher who had lived almost 200 years before. Buddha preached nonviolence, awareness, and compassion. His followers renounced wealth to wander the earth spreading Buddha’s teachings by their example. This monk was one of them. And with his courage and wisdom, he found the heart in a heartless man.

Ashoka was never the same again.

He erected a pillar, one of many, on the site of his greatest crime. One of the first edicts of Ashoka was engraved on it: “All are my children. I desire for my children their welfare and happiness, and this I desire for all.”

It wasn’t that Ashoka was violating the laws of kin selection – the evolutionary strategy that favours the reproductive success of an organism’s relatives, even at a cost to the organism’s own survival and reproduction – it was that his definition of who was kin to him had expanded to include everyone.

Ashoka would govern India for another 30 years, and he used that time to:

  • Build schools, universities, hospitals, and even hospices.
  • Introduce women’s education, seeing no reason why women could not be ordained as monks.
  • Ban the rituals of animal sacrifice and hunting for sport.
  • Establish veterinary hospitals throughout India, and counsel his citizens to be kind to animals.
  • See to it that wells were dug to bring water to the towns and villages.
  • Plant trees and build shelters along the roads of India so that the traveller would always feel welcome and animals would have the mercy of shade.
  • Sign peace treaties with the small neighbouring countries that had once trembled at the mention of his name.
  • Institute free health care for all and make sure that the medicines of the time were available to everyone.
  • Decree that all religions be honoured equally.
  • Order a judicial review of the cases of those wrongfully imprisoned or harshly treated.
  • Send Buddhist emissaries to the Middle East to teach compassion, mercy, humility, and the love of peace, transforming Buddhism from a small philosophical sect into a global religion.

The temples and palaces of Ashoka’s reign, and most of the pillars he erected throughout India, were destroyed by generations of religious fanatics, outraged by what they considered to be his godlessness. But despite their best efforts, his legacy lives on:

  • Buddhism became one of the world’s most influential religious philosophies.
  • Ashoka’s edicts were carved in stone in Aramaic, the language of Jesus, a couple of hundred years before his birth.

This is one of the few temples of Ashoka that survived the vandals, a cave in the hills of Barabar in India. It’s famous for its echo. Inside the temple, the sound waves of your voice ricochet off the walls until they’re completely absorbed by the surfaces of objects, and there’s nothing left at all.

But Ashoka’s dream is different. Its echo grows louder and louder with time.

Who are we? You tell me.

Rupert: The whole idea is ridiculous! The very assumption that a plane is more complex than Elon Musk’s rockets. That’s just absurd and objectively not the case.

Wambua: …What!?

Rupert: Part of the reason is that Elon Musk’s rockets can not only climb up through the atmosphere, but also come down to a predesignated spot in a desert or the ocean, without falling apart and becoming scrap metal or a coral-reef anchor, respectively. By doing so, Elon can theoretically save over 99% of the total cost of launching rockets.

Wambua: No, I mean, how come you are talking? Lizards don’t talk.

Rupert: And yet we are having a conversation. A fixed-wing plane is designed to generate lift in the easiest way possible, and then stay up in the air with the bare minimum of struggle. This is because its aerodynamic profile makes it work with the atmosphere, rather than against it. Stick a plane in a wind tunnel, turn up the wind, and the plane will go up, lifted by the air itself. In fact, planes are so aerodynamic that most can glide over 150 kilometres horizontally if their engines shut down at a 10 km altitude. For most planes, engine shut-off is rarely a fatal situation. In a sense, a fixed-wing craft is like a well-balanced kite in the wind: making it stay up in the air is easy. Indeed, the most complicated part of fixed-wing aviation is balancing the roll, pitch and yaw of the body to maintain stable flight. But staying up in the air? Easy peasy. Some things are not meant to defy sense.
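Rupert's glide claim is easy to sanity-check: gliding distance is roughly altitude multiplied by the aircraft's lift-to-drag ratio. A quick sketch (my numbers, not Rupert's) with a typical airliner L/D of about 17:

```python
def glide_distance_km(altitude_km: float, lift_to_drag: float) -> float:
    """Approximate horizontal distance a gliding aircraft can cover.

    Rule of thumb: in a steady, unpowered glide, horizontal distance
    ~= altitude * (lift / drag). Ignores wind and pilot technique.
    """
    return altitude_km * lift_to_drag

# Typical airliner lift-to-drag ratio of ~17, gliding from 10 km:
print(glide_distance_km(10, 17))  # 170.0 km -- consistent with "over 150 km"
```

Even a conservative L/D of 15 from 10 km gives 150 km, which is why total engine failure at cruise altitude leaves a crew many minutes and a wide circle of candidate runways.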

Wambua: A talking iguana does. You peed in my stash of pot again, didn’t you? I’m hallucinating from inhaling your pee, aren’t I?

Rupert: A rocket is a completely different animal. First, a rocket works against the atmosphere: the bulk of its fuel at launch is burnt just to crawl through the lower atmosphere. So, while the lower atmosphere lifts up an aeroplane, it creates a massive drag for a rocket. And the fuel needed to break through this barrier is often in the hundreds of tonnes, for even modest payloads. Secondly, while an aeroplane can glide idyllically with dead engines, a rocket with dead engines drops to the ground like dead weight. Engine shut-off in a plane is an inconvenience. In a rocket, it is either part of the flight profile or the beginning of a massive clusterfuck, one that ends in a guaranteed total loss of hardware plus any payload: human or otherwise.

Wambua: Oddly, you make a lot of sense. Except for the whole talking lizard thing.

Rupert: Back to SpaceX, and the inimitable magic of landing a rocket safely back on a pad. Somewhere in the huge labs of SpaceX, a certain flight engineer once stumbled upon an epiphany. He realized that if the fuel loaded into SpaceX rockets could be increased by a mere 3–4% above what was then deemed enough for a one-way flight, there would theoretically be enough fuel left in the rockets to check their descent and manoeuvre them to a safe landing, instead of the traditional splash landing in the Atlantic that used to scrap everything. So, a whole new family of tech was built from the ground up. The RCS (Reaction Control System) thrusters had to be reconfigured: given enough delta-v to flip the entire rocket over at MECO (Main Engine Cut-Off), often at flight apogee, into a retrograde orientation. That is, the bottom of the rocket, where the engines are, would be turned around to face the direction of travel, while the tip of the rocket now faced the direction from which it had come.

Wambua: Look, if we are going to effectively communicate, I have to get used to this … this whole madness. Tell me what it feels like to be a lizard.

Rupert: Feels like a human being without the cojones. So, at apogee, most SpaceX boosters are moving horizontally at about 7,000 km/h. That’s roughly Mach 6. Hypersonic. When the flight path curves down, back towards the Earth, this speed increases even further. Left unchecked, it would result in irreparable damage to the rocket as it transited from the stratosphere to the troposphere. So the onboard computers have to fire some of the engines at the base to shave off velocity. This firing is delicate, in both its timing and the retrograde thrust it creates. Too early, and the rocket exhausts its fuel, guaranteeing a crash landing. Too late, and aerodynamic buffeting tears the rocket apart. Too much thrust, and the rocket not only stops mid-air but also reverses and rapidly climbs upwards again: the thrust-to-weight ratio of the Merlin 1D engines is simply insane. The sweet spot is somewhere in between, and before SpaceX engineers got it just right, they blew up quite a few rockets. The last major challenge, getting the rocket onto the drone ship downrange in the Atlantic, is twofold. First, the falling rocket and the drone ship have to rendezvous at the exact same point, or the rocket falls into the water and sinks. Grid fins and highly articulate engine gimbals kick in here, correcting the flight path on split-second timescales. The tech to actuate these had to be built from the ground up. Secondly, at the last moment, just before the rocket reaches the drone ship’s deck, the main engine fires one last time, in what’s known at SpaceX as the “hoverslam” manoeuvre, or the “suicide burn”. This last burn kills off all remaining downward velocity, ensuring a soft touchdown on the landing legs. Done wrong, the rocket crashes on the deck, incinerating everything in sight.

Wambua: WAIT, wait. You don’t have cojones?

Rupert: I don’t. Back to rockets, it goes without saying that, before all the above manoeuvres were perfected, SpaceX engineers suffered a lot of cold sweats, and straight-up horror when any of a million variables went nuts: the weather around, the tolerance of some rocket parts giving up at the wrong time, some programming code hiccupping in the thick of things, etc. And in almost every single case of such misadventures, the cost would be the same: total, fiery loss of the rocket. Millions of dollars up in literal flames. By comparison, aviation, especially fixed-wing planes, is like a walk in the park.

Wambua: But if you don’t have cojones…

Rupert: I’m female, yeah. Stop looking at me like that, you twisted being.
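For the curious, the burn-timing trade-off Rupert describes can be sanity-checked with one line of kinematics. The numbers below are illustrative guesses, not SpaceX figures:

```python
# A toy "suicide burn": the booster must light its engine at the one
# altitude where a constant deceleration brings it to rest exactly at
# the deck. All numbers are illustrative guesses, not SpaceX data.

G = 9.81  # gravity, m/s^2

def ignition_altitude(speed, thrust_accel):
    """Burn-start altitude so that v = 0 at touchdown.

    From v^2 = 2 * a_net * h, where a_net is the engine's deceleration
    minus gravity."""
    a_net = thrust_accel - G
    return speed**2 / (2 * a_net)

# Say the stage falls at ~250 m/s and the engine can decelerate it at
# a net ~30 m/s^2 (engine thrust of about 4 g, minus 1 g of gravity).
h = ignition_altitude(speed=250.0, thrust_accel=30.0 + G)
print(f"light the engine at roughly {h:.0f} m up")  # ~1,040 m
```

Start the burn higher than this and the rocket stops in mid-air and begins climbing; start it lower and it reaches the deck with velocity to spare.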

Once there was a world not so very different from our own.

There were occasional natural catastrophes, massive volcanic eruptions and, every once in a while, an asteroid would come barrelling out of the blue to do some damage.

But for the first billion years or so, it would’ve seemed like a paradise, the very personification of its name: The Goddess of Beauty.

This is what we think the planet Venus might have looked like when our solar system was young.

Then things started to go horribly wrong.

The planet Venus, which once may have seemed like a heaven, turned into a kind of hell. The difference between the two can be a delicate balance, far more delicate than you might imagine.

Once things began to unravel, there was no way back.

This is what Venus, our nearest planetary neighbour, looks like today.

Venus’s oceans are long gone. The surface is hotter than a broiling oven, hot enough to melt lead. Why? You might think it’s because Venus is 30% closer to the Sun than the Earth is, but that’s not the reason. Venus is completely covered by clouds of sulphuric acid suspended in a dense carbon-dioxide atmosphere, and those clouds keep almost all the sunlight from reaching the surface. That ought to make Venus much colder than Earth.

So why is Venus scorching hot? It’s because the small amount of sunlight that trickles in through the clouds to reach the surface can’t get back out again. The flow of energy is blocked by the dense atmosphere of carbon dioxide. That carbon dioxide gas – or CO2 for short – acts like a smothering blanket to keep the heat in.

No one is burning coal or driving big petroleum guzzlers on Venus. Nature can destroy an environment without any help from intelligent life.

Venus is in the grip of a runaway greenhouse effect.

In 1982, the scientists and engineers of what was then the Soviet Union successfully landed Venera 13 on Venus. They managed to keep it refrigerated for over two hours, so it could photograph its surroundings and transmit the images back to Earth before the onboard electronics were fried.

This is what Venera 13 saw.

Venus and Earth started out with about the same amount of carbon, but the two worlds were propelled along radically different paths, and carbon was the decisive element in both stories. On Venus, it’s almost all in the form of gas – carbon dioxide – in the atmosphere.

Most of the carbon on Earth has been stored for aeons in solid vaults of carbonate rock, like limestone and chalk. How? Volcanoes supply carbon dioxide to the atmosphere, and the oceans slowly absorb it. Working over the course of millions of years, microscopic algae harvest the carbon dioxide and turn it into tiny shells. They accumulate in thick deposits of chalk, or limestone. Other marine creatures take in carbon dioxide to build enormous coral reefs. And the oceans convert dissolved CO2 into limestone even without any help from life. As a result, only a trace amount is left as a gas in Earth’s atmosphere. Not even four-hundredths of one per cent.

Think of it – about four molecules out of every ten thousand. And yet, it makes the critical difference between a barren wasteland and a garden of life on Earth. With no CO2 at all, the Earth would be frozen. And at about six molecules out of ten thousand – just half again as many – things would get uncomfortably hot and cause us some serious problems.

But never as hot as Venus; not even close. That planet lost its ocean to space billions of years ago. Without an ocean, it had no way to capture CO2 from the atmosphere and store it as a mineral. The CO2 from erupting volcanoes just continued to build up.

Today, that atmosphere is 90 times heavier than ours. Almost all of it is heat-trapping carbon dioxide. That’s why Venus is such a ferocious inferno – so hostile to life.

Earth, in stunning contrast to Venus, is alive. It breathes, but very slowly. A single breath takes a whole year.

The forests contain most of Earth’s life, and most forests are in the Northern Hemisphere.

When spring comes to the north, the forests inhale carbon dioxide from the air and grow, turning the land green. The amount of CO2 in the atmosphere goes down. When fall comes, the plants drop their leaves, which decay, exhaling the carbon dioxide back into the atmosphere. The same thing happens in the Southern Hemisphere at the opposite time of the year. But the Southern Hemisphere is mostly ocean. So it’s the forests of the north that control the annual changes in global CO2.

Earth has been breathing like this for tens of millions of years. But nobody noticed until 1958 when an oceanographer named Charles David Keeling devised a way to accurately measure the amount of carbon dioxide in the atmosphere. Keeling discovered the Earth’s exquisite respiration. But he also discovered something shocking – a rapid rise, unprecedented in human history, in the overall level of CO2, one that has continued ever since.

It’s a striking departure from the CO2 levels that prevailed during the rise of agriculture and civilization. In fact, the Earth has seen nothing like it for millions of years.

How can we be so sure? The evidence is written in water.

The Earth keeps a detailed diary written in the snows of yesteryear. Climate scientists have drilled ice cores from the depths of glaciers in Greenland and Antarctica. The ice layers have ancient air trapped inside them. We can read the unbroken record of Earth’s atmosphere that extends back over the last 800,000 years. In all that time, the amount of carbon dioxide in the air never rose above three-hundredths of one percent. That is, until the turn of the 20th century. And it’s been going up steadily and rapidly ever since. It’s now more than 40% higher than before the Industrial Revolution. By burning coal, oil and gas, our civilization is exhaling carbon dioxide much faster than Earth can absorb it. So CO2 is building up in the atmosphere. The planet is heating up.

Every warm object radiates a kind of light we can’t see with the naked eye—thermal infrared light. We all glow with invisible heat radiation, even in the dark.

This is what Earth looks like in the infrared. You’re seeing the planet’s own body heat.

Incoming light from the Sun hits the surface. The Earth absorbs much of that energy, which heats the planet up and makes the surface glow in infrared light. But the carbon dioxide in the atmosphere absorbs most of that outgoing heat radiation, sending much of it right back to the surface. This makes the planet even warmer.

This is all there is to the greenhouse effect. It’s basic physics, just bookkeeping of the energy flow. There’s nothing controversial about it.

If we didn’t have any carbon dioxide in our atmosphere, the Earth would just be a great big snowball, and we wouldn’t be here. So, a little greenhouse effect is a good thing. But a big one can destabilize the climate and wreck our way of life.
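That energy bookkeeping is simple enough to do in a few lines. This sketch uses standard textbook values (solar constant, albedo); comparing the result with the observed 288 K surface average gives the roughly 33 degrees the greenhouse effect currently contributes:

```python
# Zero-dimensional energy balance: absorbed sunlight = emitted infrared.
# Standard textbook values; this is bookkeeping, not a climate model.

SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0         # solar constant at Earth's distance, W/m^2
ALBEDO = 0.3       # fraction of sunlight reflected straight back to space

# Sunlight is intercepted by a disc (pi r^2) but emitted by a sphere
# (4 pi r^2), hence the factor of 4.
absorbed = S * (1 - ALBEDO) / 4

# Temperature at which Earth's own infrared glow balances that input.
t_no_greenhouse = (absorbed / SIGMA) ** 0.25

print(f"no-greenhouse temperature: {t_no_greenhouse:.0f} K")        # ~255 K
print("observed surface average:  288 K")
print(f"greenhouse contribution:  ~{288 - t_no_greenhouse:.0f} K")  # ~33 K
```

Without any greenhouse gases, Earth would sit some 33 degrees colder than it does – the snowball. The trouble is only in adding more.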

All right, but how do we know that we’re the problem? Maybe the Earth itself is causing the rise in CO2. Maybe it has nothing to do with the coal and oil we burn. Maybe it’s those damn volcanoes. They’ve already doomed the planet Venus anyway.

Every few years, Mount Etna, in Sicily, blows its stack. Each new eruption sends millions of tonnes of CO2 into the atmosphere.

Now, combine that with the output of all the other volcanic activity on the planet. Let’s take the largest scientific estimate – about 500 million tonnes of volcanic CO2 entering the atmosphere every year. Sounds like a lot, right? But that’s not even two percent of the 36 billion tonnes of CO2 that our civilization is cranking out every year. And, funny thing, the measured increase in CO2 in the atmosphere tallies with the known amount we’re dumping there by burning coal, oil and gas. Volcanic CO2 has a distinct signature – it’s slightly heavier than the kind produced by burning fossil fuels. We can tell the difference between the two when we examine them at the atomic level. It’s clear that the increased CO2 in the air is not from volcanoes. What’s more, the observed warming is as much as predicted from the measured increase in carbon dioxide.
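The arithmetic behind that “not even two percent” is worth a quick check, using the figures quoted above:

```python
# Volcanoes versus us, using the figures quoted above.

volcanic = 500e6   # tonnes of CO2 per year, largest scientific estimate
human = 36e9       # tonnes of CO2 per year from burning fossil fuels

share = volcanic / human
print(f"volcanic share: {share:.1%} of our annual output")  # -> 1.4%
```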

It’s a pretty tight case. Our fingerprints are all over this one.

How much is 36 billion tonnes of CO2 per year? If you compressed it into solid form, it would occupy about the same volume as Mount Kilimanjaro. And we’re adding that much CO2 to the air every year, relentlessly, year after year.

Mount Kilimanjaro in Tanzania, the world’s tallest free-standing mountain. With a bit of tweaking, it gives a scale of just how much CO2 we are dumping into our atmosphere.

Unlucky for us, the main waste product of our civilization is not just any substance. It happens to be the chief climate-regulating gas of our global thermostat, year in, year out. Too bad CO2 is an invisible gas. Maybe if we could see it, if our eyes were sensitive to CO2 – and perhaps there are such beings in the cosmos – if we could see all that carbon dioxide, then we would overcome the denial and grasp the magnitude of our impact on the atmosphere.

But the evidence that the world is getting warmer is all around us. For starters, let’s just check the thermometers. Weather stations around the world have been keeping reliable temperature records since the 1880s, and NASA has used the data to compile a map tracking average temperatures around the world through time.

Yellow means temperatures warmer than that region’s average in the 1880s. Orange means hot. And red means hotter. The world is warmer than it was in the 19th century.

As far back as 1896, Swedish scientist Svante Arrhenius calculated that doubling the amount of CO2 in the atmosphere would melt the Arctic ice. In the 1930s, the American physicist E.O. Hulburt, at the Naval Research Laboratory, confirmed that result. So far, it was still just theoretical. But then, the English engineer Guy Callendar assembled the evidence to show that both the CO2 and the average global temperature were actually increasing.

Since Dr Frank Baxter uttered these words in 1958, we’ve loaded our atmosphere with an additional 1.36 trillion tonnes of CO2.

If we don’t change our ways, what will the planet be like in our children’s future? Based on scientific projections, if we just keep on doing business as usual, our kids are in for a rough ride:

  • killer heat waves
  • record droughts
  • deadly tropical and infectious diseases spreading to the far reaches of the globe
  • mass extinction of species
  • rising sea levels and sinking coastal cities
  • mass death of coral reefs from ocean warming
  • increase in the intensity of catastrophic storms
  • runaway wildfires

We inherited a bountiful world made possible by a relatively stable climate. Agriculture and civilization flourished for thousands of years. And now, our carelessness and greed put all of that at risk.

Okay, so if you scientists are so good at making these dire, long-term predictions about the climate, how come you’re so lousy at predicting the weather? Besides, this year we’re having a colder season in my country. For all you know, we could be in for global cooling.

Here’s the difference between weather and climate: Weather is what the atmosphere does in the short term – hour to hour, day to day. Weather is chaotic, which means that even a microscopic disturbance can lead to large-scale changes. That’s why those ten-day weather forecasts are useless. A butterfly flaps its wings in Kinshasa, and six weeks later, your outdoor wedding in Gaborone is ruined.

Climate is the long-term average of the weather, over several years. It’s shaped by global forces that alter the energy balance in the atmosphere, such as changes in the Sun, the tilt of the Earth’s axis, the amount of sunlight the Earth reflects back to space and the concentration of greenhouse gases in the air. A change in any of them affects the climate in broadly predictable ways.

Climate has changed many times in the long history of the Earth but always in response to a global force. The strongest force driving climate change right now is the increasing CO2 from the burning of fossil fuels, which is trapping more heat from the Sun. All that additional energy has to go somewhere. Some of it warms the air. Most of it ends up in the oceans. All over the world, the oceans are getting warmer. It’s most obvious in the Arctic Ocean and the lands that surround it.

Okay, so we’re losing the summer sea ice in a place where hardly anyone ever goes. What do I care if there’s no ice around the North Pole?

Ice is the brightest natural surface on Earth, and open ocean water is the darkest. Ice reflects incoming sunlight back into space. Water absorbs sunlight and gets warmer, which melts even more ice, which exposes still more ocean surface to absorb even more sunlight. This is what we call a positive feedback loop. It’s one of many natural mechanisms that magnify any warming caused by CO2 alone.
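As an aside, the arithmetic of such a feedback loop is simple. A sketch, with rough textbook values that are assumptions of mine, not figures from the text:

```python
# A positive feedback amplifies the direct warming: each degree of
# warming triggers f more degrees, which trigger f^2 more, and so on.
# For f < 1 the geometric series sums to direct / (1 - f).
# The numbers below are rough textbook values, purely illustrative.

def amplified_warming(direct, f):
    """Total warming after feedbacks, for feedback factor 0 <= f < 1."""
    if f >= 1:
        raise ValueError("f >= 1 means runaway warming (cf. Venus)")
    return direct / (1 - f)

direct = 1.2   # deg C for doubled CO2 with no feedbacks (textbook value)
f = 0.6        # combined feedback factor (ice-albedo, water vapour, ...)
print(f"{amplified_warming(direct, f):.1f} deg C total")  # -> 3.0 deg C total
```

Push f to 1 or beyond and the series diverges – the runaway greenhouse that overtook Venus.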

Collapsed block of ice-rich permafrost along Drew Point, Alaska, at the edge of the Beaufort Sea. In the 1950s, the shoreline was two kilometres further out, and it was breaking off at a rate of about 6 metres per year. Now it’s been eaten away at about 20 metres per year.

The Arctic Ocean is warming at an increasing rate. So it’s ice-free during more of the year. That leaves the shore more exposed to erosion from storms, which are also getting more powerful, another effect of climate change.

The northern reaches of Alaska, Siberia and Canada are mostly permafrost, ground that has been frozen year-round for millennia. It contains lots of organic matter, old leaves and roots from plants that grew thousands of years ago. Because the Arctic regions are warming faster than anywhere else on Earth, the permafrost is thawing and its contents are rotting, just like when you unplug the freezer. The thawing permafrost is releasing carbon dioxide and methane, an even more potent greenhouse gas, into the atmosphere. This is making things even warmer, another example of a positive feedback mechanism. The world’s permafrost stores enough carbon to more than double the CO2 in the atmosphere. At the rate we’re going, global warming could release most of it before the end of the century. We might be tipping the climate past a point of no return into an unpredictable slide.

Okay, the air, the water and the land are all getting warmer, so global warming is really happening. But maybe it’s not our fault. Maybe it’s just nature. Maybe it’s the Sun.

No, it’s not the Sun. Scientists have been monitoring the Sun very closely for decades, and the solar energy output hasn’t changed. What’s more, the Earth is warming more at night than in the daytime, and more in winter than in summer. That’s exactly what we expect from greenhouse warming, but the opposite of what increased solar output would cause. It’s now clear beyond any reasonable doubt that we are changing the climate.

The Sun isn’t the problem. But it is the solution. In all its glory, the Sun pours immaculate, free energy down upon us – more than we will ever need. More solar energy falls on Earth in one hour than all the energy our civilization consumes in an entire year. The winds and the waves are solar-powered too, because it is our star that drives them. Unlike solar collectors, wind farms take up very little land, and none at all if offshore, where the winds are strongest. If we could tap even one per cent of the available solar and wind power, we’d have enough to supply all our energy needs forever, without adding any carbon to the atmosphere.

It’s not too late. There’s a future worth fighting for. How do I know? Every one of us comes from a long line of survivors. Our species is nothing if not adaptive. It was only because our ancestors learned to think long-term and act accordingly, that we’re here at all. We’ve had our backs to the wall before, and we came through to scale new heights. In fact, the most mythic human accomplishment of all came out of our darkest hour.

About 10,000 years ago, our ancestors all over the world took advantage of another form of climate change, the gentler climate of the intermission in the ice age – they invented agriculture.

They gave up the ceaseless wandering, hunting and gathering that had been their way of life for a million years or so, to settle down and produce food. They found a way to harvest ten to a hundred times more solar energy than the environment naturally provided for their ancestors. People all over the world made the difficult transition from nomadic cultures to agricultural ones that used solar energy more efficiently. It gave rise to civilization. We stand on the shoulders of those who did the hard work that such a fundamental transformation required.

Now it’s our turn.

If life ever existed on Venus, it would have had no chance to avert the hellish destiny of that world. The runaway greenhouse effect was unstoppable.

Earth is our world, and the time to act is now. There are no scientific or technological obstacles to protecting it and the precious life it supports. It all depends on what we truly value, and on whether we can summon the will to act.

So, via the might of science, I am now able to answer the recurring questions: “Why don’t you have a girlfriend?”, “Are you gay?”, “You must have a tiny schlong”.

The answer – The Drake Equation! The Drake equation is used to estimate the number of highly evolved civilizations that might exist in our galaxy. And with a little bit of tweaking where necessary, I can use it to find out the number of potential girlfriends for me.

The equation is generally specified as:

G = R ⋅ fP ⋅ ne ⋅ fl ⋅ fi ⋅ fc ⋅ L

Where:

  • G = The number of civilizations capable of interstellar communication
  • R = The rate of formation of stars capable of supporting life (stars like our Sun)
  • fP = The fraction of these stars that have planets
  • ne = The average number of planets similar to Earth per planetary system
  • fl = The fraction of the Earth-like planets supporting life of any kind
  • fi = The fraction of life-supporting planets where intelligent life develops
  • fc = The fraction of planets with intelligent life that are capable of interstellar communication (those which have electromagnetic technology like radio or TV)
  • L = The length of time such communicating civilizations survive

Using this equation, Prof. Drake estimated that some 10,000 communicative civilizations probably exist in the Milky Way alone. Astronomers estimate that there are between 200 and 400 billion stars in the Milky Way. Let’s call it 300 billion. This makes the probability of a star chosen at random supporting life capable of interstellar communication about 0.0000033% – roughly 1 in 30 million.
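Drake’s figure can be reproduced with one plausible set of inputs. The individual values below are illustrative assumptions of mine, since each parameter is hotly debated:

```python
# Reproducing Drake's ~10,000 with one plausible set of inputs.
# Every value here is illustrative; the true parameters are uncertain.

R  = 1        # suitable stars formed per year in the Milky Way
fP = 0.5      # fraction of those stars with planets
ne = 2        # Earth-like planets per planetary system
fl = 1.0      # fraction of Earth-like planets that develop life
fi = 1.0      # fraction of those where life becomes intelligent
fc = 0.1      # fraction that develop interstellar communication
L  = 100_000  # years such a civilization keeps transmitting

G = R * fP * ne * fl * fi * fc * L
print(f"G = {G:,.0f} civilizations")             # -> G = 10,000 civilizations

stars = 300e9  # stars in the Milky Way
print(f"odds per random star: {G / stars:.7%}")  # -> 0.0000033%
```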

Another way to think about this is that this is the probability of the conditions necessary for us to communicate with an alien civilization being satisfied. These seem like slim odds at best, but the probability is positive (There is a chance!) and this approach is widely accepted by astronomers (This isn’t science fiction!). The idea that there could be 10,000 civilizations that we are capable of communicating with is very exciting indeed.

While extraterrestrial civilizations may be rare, there is something that is seemingly rarer still: A girlfriend. For me. What might the approach employed in the estimation of the number of alien civilizations tell us about the number of potential girlfriends for me? A somewhat less scientific question, I admit, but one of substantial personal importance.

The parameters are re-defined as follows with the values in brackets:

  • G = The number of potential girlfriends:

One can easily substitute boyfriends here but, as I am a heterosexual male, I will focus on the search for a girlfriend.

  • R = The rate of formation of people in Kenya (i.e. population growth):

This is about 1,000,000 people per year over the last 60 years.

  • fW = The fraction of people in Kenya who are women. (0.51)

The Kenya National Bureau of Statistics puts it at just over half of the population.

  • fL = The fraction of women in Kenya who live in Nairobi. (0.09)

I would like my girlfriend to be nearby so that we can see each other. This makes it easier to get to know each other, avoids the difficulties of a long-distance relationship and saves me the bus fare.

  • fA = The fraction of the women in Nairobi who are age-appropriate. (0.19)

I am 27 years old (Thank you, I know I don’t look it). I would like my girlfriend to be near my age. I don’t want to feel older than I am by not being able to keep up with a sprightly eighteen-year-old, or because I haven’t watched Purple Hearts and I don’t know who Olivia Rodrigo is. Nor do I want to fall prey to a voracious cougar or to be regaled with stories of the fight for multiparty democracy. Let’s say I am looking for a woman between 23 and 29 years of age.

  • fU = The fraction of age-appropriate women in Nairobi with a university education. (0.01)

I am not trying to be an elitist or anything, but I would like my girlfriend to have a university education. I think we would have more in common and I would like someone I could discuss my work with sometimes. I know that there are many intelligent people who don’t go to university, so don’t get all righteously indignant. Everyone has preferences. How many women out there have dated men shorter than themselves? I rest my case.

  • fB = The fraction of university-educated, age-appropriate women in Nairobi who I find physically attractive. (0.05)

Physical attractiveness is important. It is often the first thing people notice about each other and it makes sex easier. Not that my potential girlfriend need be considered attractive by anyone else, but I must find her attractive. This is a tough parameter to estimate. Let’s be generous and say I find 1 in 20, or 5% of age-appropriate women in Nairobi with a university education physically attractive.

  • L = The length of time in years that I have been alive thus making an encounter with a potential girlfriend possible.

27? Good lord, I am old.

We can simplify the above specification by recognizing that the number of people who have ever lived in Kenya is related to the population growth rate by:

N = ∫₀ᵀ R(t) dt

where T is the age of Kenya. If we assume that R is constant over the period T then N = R ⋅ T. While this simplification is often used for the Drake Equation’s intended purpose, it is not a good assumption when adapting the equation for our purposes here. Instead, we use N*, the population of Kenya as of mid-2023, where:

N* = 55,100,586

With this simplification, we can re-specify the Drake equation as:

G = N* ⋅ fW ⋅ fL ⋅ fA ⋅ fU ⋅ fB

If we plug in the above values we get:

G = 55,100,586 ⋅ 0.51 ⋅ 0.09 ⋅ 0.19 ⋅ 0.01 ⋅ 0.05

or:

G = 240

So, what this means is that 240 people in Kenya satisfy these most basic criteria for being my girlfriend. That is 0.00044% of Kenyans and 0.0047% of Nairobians, which does not seem so good. On a given night in Nairobi, there is roughly a 1 in 10,000 chance that I will meet an attractive woman between the ages of 23 and 29 with a university degree. Of course, this does not take into account the fraction of these women who will find me attractive (depressingly low), the fraction who will be single (falling with age) and, perhaps most importantly, the fraction who I will get along with. Including such factors greatly reduces the figure of 240. A rough estimate accounting for these three additional criteria (1 in 10 of the women find me attractive, half are single and I get along with 1 in 10) puts the number of potential girlfriends at barely more than 1. Let’s be generous and call it 2. That’s correct: there are only 2 women in Kenya with whom I might have a wonderful relationship. So, on a given night out in Nairobi, there is a 0.0000036% chance of meeting one of these special people – a 1 in 27,550,293 chance, about the same as the odds of a randomly chosen star hosting an alien civilization we can communicate with. Not great. At all!
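For completeness, the whole calculation fits in a few lines, using the parameter values given above (the filter fractions in the last step are the rough guesses from the preceding paragraph):

```python
# The girlfriend equation, with the parameter values from the text.

N_STAR = 55_100_586   # population of Kenya, mid-2023
fW = 0.51             # fraction who are women
fL = 0.09             # fraction of Kenyan women living in Nairobi
fA = 0.19             # fraction of those who are age-appropriate (23-29)
fU = 0.01             # fraction of those with a university education
fB = 0.05             # fraction of those I find physically attractive

G = N_STAR * fW * fL * fA * fU * fB
print(round(G))       # -> 240

# The three extra filters: 1 in 10 find me attractive, half are single,
# and I get along with 1 in 10.
refined = G * 0.1 * 0.5 * 0.1
print(round(refined, 1))  # -> 1.2
```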

Make of this what you will. It might cheer you up, it might depress you. I guess it depends on what you thought your chances were before reading this. But how do you think I feel? I spent aeons perfecting this formula, only to prove that I am the highest note in a sad song.

For us in the tropics, the lack of a chiller wasn’t much of a problem: we preserved food by smoking, drying, salting, fermentation, or some combination of these.

But for our siblings in the north, it used to be hard to keep food from spoiling in the summertime. There was a person called “the iceman”. He would go to their house and sell them a big block of ice. They’d keep it in something called an “icebox” to preserve the kinds of food that spoiled quickly. But that was a drag because the ice kept melting. It would drip all over the floor.

So, somebody thought up another way to keep food cold. It was a gas-powered system that used ammonia or sulphur dioxide as a coolant. No more lugging blocks of ice. What could be bad about that? Well, the chemicals were not only poisonous, they smelled terrible and there were leaks.

A substitute coolant was badly needed. One that would circulate inside the refrigerator, but would not poison anyone if the refrigerator leaked, or pose a danger if it was sent to the junkyard. Something that wouldn’t make you sick, wouldn’t burn your eyes, or attract bugs, or even bother the cat. But in all of nature, no such material seemed to exist.

So, chemists invented a class of molecules. Little collections of even tinier things called atoms, that had never existed on Earth before. They called them chlorofluorocarbons, or CFCs, because they were made up of one or more carbon atoms and some chlorine and/or fluorine atoms. These new molecules were wildly successful, far exceeding the expectations of their inventors. Not only did CFCs become the chief coolant in refrigerators, but also in air conditioners. There were so many things you could do with CFCs:

  • People used them to propel great fluffy mounds of shaving cream.
  • And to protect your hair from wind and rain.
  • It was also the propellant that made fire extinguishers and spray paint cans so much fun.
  • It was good for foam insulation, industrial solvents and cleansing agents.

The most famous brand name of these chemicals was Freon, a trademark of DuPont. It was used for decades and no harm ever seemed to come from it. Safe as safe could be, everyone figured. Until, in the early 1970s, two atmospheric chemists at the University of California, Irvine were studying Earth’s atmosphere.

Mario Molina was a Mexican immigrant and a young laser chemist. Sherwood Rowland was a chemical kineticist, someone who studies how fast molecules and gases react under varying conditions.

Molina wanted to grow as a scientist. He was looking for a project that would take him as far from his previous research experience as possible. He wondered. What happens to those Freon molecules when they leak out of the air conditioner? This was a time when the Apollo astronauts were still making regularly scheduled trips to the Moon. And NASA was contemplating weekly launches of a space shuttle. Would all that burning rocket fuel pose a danger to the stratosphere, that place where Earth’s atmosphere meets the blackness of space? And this is how science works a lot of the time. You set out to solve one problem, and you happen on a completely different, unexpected phenomenon.

Those wonderfully inert, “harmless” CFCs, the magic molecules of shaving cream and hair spray, didn’t simply vanish when we were done with them. They had an afterlife at the edge of space, where they accumulated in the trillions. They were silently congregating high above the Earth, and they were up to no good. Molina and Rowland were alarmed to discover that the CFCs had thinned the protective layer that shielded us from the Sun’s harmful ultraviolet radiation. And it was getting worse all the time. When UV light hits a CFC molecule, it strips away the chlorine atoms. Once that happens, the chlorine atoms start devouring the precious ozone molecules.

A single chlorine atom can destroy 100,000 ozone molecules.
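The reason one chlorine atom can destroy so many ozone molecules is that it acts as a catalyst, emerging intact from every encounter. The standard two-step cycle, implicit in the description above, is:

```latex
\begin{align*}
\mathrm{Cl} + \mathrm{O_3} &\rightarrow \mathrm{ClO} + \mathrm{O_2} \\
\mathrm{ClO} + \mathrm{O} &\rightarrow \mathrm{Cl} + \mathrm{O_2} \\
\text{net:}\quad \mathrm{O_3} + \mathrm{O} &\rightarrow 2\,\mathrm{O_2}
\end{align*}
```

The chlorine atom comes out of the net reaction unchanged, free to repeat the cycle on the order of a hundred thousand times before it is finally locked away in a stabler molecule.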

It wasn’t until our planet developed an ozone layer, about two and a half billion years ago, that it was safe for life to leave the ocean for the land.

CFCs were in everything, and the manufacturers couldn’t imagine a world without them. The corporate response to this danger was that the science hadn’t been settled. People had a hard time believing that we had become powerful enough as a species to endanger life on the planet. They looked for non-human causes for the loss of the ozone in the sky. One Reagan administration official suggested that everyone just wear more sunblock and put on a hat and sunglasses. But the scientists pointed out that the plankton, those tiny plants at the base of the global food chain, and the larger plants, were unlikely to do so.

Molina and Rowland tirelessly worked to warn the world.

What’s the use of having developed a science well enough to make predictions if, in the end, all we’re willing to do is stand around and wait for them to come true?

Sherwood Rowland

But then something amazing happened. There was a global outcry. People all over the world got involved. In the 1960s, the women of the world demanded an end to atmospheric nuclear testing because they didn’t want to nurse their babies with poisoned milk. Then, in the ’80s, consumers demanded that the corporations stop manufacturing CFCs. And you know what? The governments listened. The Montreal Protocol – the international treaty designed to protect the ozone layer by phasing out the production of numerous substances responsible for ozone depletion – was signed 36 years ago today. CFCs were banned in 197 countries. That’s just about as many countries as there are on this planet. The ozone layer has been getting thicker ever since.

But what would’ve happened if Rowland and Molina hadn’t been curious about the stratosphere, or if their warnings had been ignored? By 2060, the ozone would have been all but gone from the entire planet. You would never have been able to take your children out to bask in the sunshine. The food crops would have completely failed. The herbivores, which live off them, would have died out. The carnivores would subsist on their corpses for a while, but ultimately, they, too, would be doomed.

If we continue to safeguard the ozone layer, it will be completely mended by 2050. And that’s why this is one danger you can cross off your worry list.

And so, as we reflect on the remarkable journey of discovery, responsibility, and global cooperation sparked by the lessons of the Montreal Protocol, we find ourselves standing at the crossroads of another monumental challenge — the looming threat of global warming and climate change.

Embers of Heritage in the Digital Age

Tales formed a crucial part of my childhood. Growing up, I used to love going upcountry to visit my grandparents because of the interesting stories they had about their many years of existence on this speck of dust we call home. This is only one instance of how stories and legends shared over the flames of bonfires have always linked generations. But, in the age of screens and clicks, that connection is fraying. Don’t worry though: within the world of technology exists a modern-day tapestry to preserve our stories, ensuring they echo even on the far horizons of the Moon and Mars – someday when we settle on these cosmic neighbours.

Tales used to be shared over bonfires

Weaving Tales into the Cosmic Fabric

Stay with me… let’s consider a world in which the substance of our ancestral stories does not fade but rather evolves. Consider a virtual bonfire, a meeting place for the global (interplanetary) village to share stories not only across generations but throughout the cosmos. Picture a digital loom where history’s threads intertwine, weaving a tapestry for posterity’s gaze.

This is not a new concept to us. We started thinking about sharing our stories throughout the cosmos as early as 1977, when Voyager 1 and Voyager 2 were launched from Cape Canaveral, Florida, aboard Titan-Centaur expendable rockets.

These two spacecraft, now approximately 24 billion kilometres (Voyager 1) and 20 billion kilometres (Voyager 2) from Earth, carry within them The Golden Record: a phonograph record that contains a curated selection of sounds and images representing the diversity of life and culture on Earth.

The records are a time capsule, intended to communicate a story of our world to other civilizations that may exist out there.

Chronicles in the Cloud: More Than Just Data

Voyager chronicles aside, let’s get technical, but not too technical. Consider your narrative to be a file: one passed down through your family, or one you have collected as a hobby. Instead of this file being tucked away, it’s floating in a digital cloud, where it can scatter stories like cosmic confetti. And it’s not just words; there are images, sounds, and animations as well, making our stories the life of this virtual party.

Cultural Constellations: A Galaxy of Narratives

Every culture is represented by a star in the narrative galaxy. These stars are shining even brighter as a result of technological advancements. Imagine exchanging bedtime stories from all around the world with your new Moon friends, creating a constellation of stories that would make even the Milky Way jealous! Do you want to know what the best part is? You’d be able to interact with tales from different cultures around the world, whether it’s the Māori, the Native Americans, the Aboriginal Australians, or even early humans.

AR & VR Voyages: From Campfires to Cosmic Camps

Now with all these stories available at the click of a button, put on your AR/VR goggles and enter a world where campfires are more than simply logs—they’re not even real! But what about the stories? They’re just as real as they’ve always been. Even if the fire isn’t hot enough for roasting marshmallows, you’re right there, feeling the crackling energy of stories.

10 points for Gryffindor if you can talk to someone in New Zealand while you are in your sustainable city apartment in Nairobi, and ask them what emotions they are feeling at that very moment. A virtual world, given a personal touch.

Passing the Digital Torch: Where Bytes Become Legends

We need to keep our tales alive like a torch in a relay race. However, in this cosmic race, we’re not simply passing a torch, but also a legacy. Technology is more than simply a gadget; it is the link between generations. And, like a phoenix emerging from the ashes, our stories rise from the old to embrace the new, illuminating the path for future generations.

Buckle up, fellow Voyager, for we’re travelling to new galaxies with technology as our spaceship and stories as our fuel. It’s not only about storytelling; it’s about preserving our humanity in the vastness of the universe.

“Alfred, it’s spinning.” Roy Kerr, a New Zealand-born physicist in his late 20s, had, for half an hour, been chain-smoking his way through some fiendish mathematics. Alfred Schild, his boss at the newly built Centre for Relativity at the University of Texas, had sat and watched. Now, having broken the silence, Kerr put down his pencil. He had been searching for a new solution to Albert Einstein’s equations of general relativity, and at last, he could see in his numbers and symbols a precise description of how space-time—the four-dimensional universal fabric those equations describe—could be wrapped into a spinning ball. He had found what he was looking for.

When this happened, in 1962, the general theory of relativity had been around for almost half a century. It was customarily held up as one of the highest intellectual achievements of humanity. And it was also something of an intellectual backwater. It was mathematically taxing and mostly applied to simple models with little resemblance to the real world, and thus not widely worked on. Kerr’s spinning solution changed that. Given that pretty much everything in the universe is part of a system that spins at some rate or other, the new solution had relevance to real-world possibilities—or, rather, out-of-this-world ones—that previous work in the field had lacked. It provided science with a theoretical basis for understanding a bizarre object that would soon bewitch the public imagination: the black hole.

General relativity was presented to the Prussian Academy of Sciences over the course of four lectures in November 1915; it was published on December 2nd that year. The theory explained, to begin with, remarkably little, and unlike quantum theory, the only comparable revolution in 20th-century physics, it offered no insights into the issues that physicists of the time cared about most. Yet it was quickly and widely accepted, not least thanks to the sheer beauty of its mathematical expression; a hundred years on, no discussion of the role of aesthetics in scientific theory seems complete without its inclusion.

When gravity fails

Today its appeal goes beyond its elegance. It provides a theoretical underpinning to the wonders of modern cosmology, from black holes to the Big Bang itself. Its equations have recently turned out to be useful in describing the physics of earthly stuff too. And it may still have secrets to give up: enormous experiments are underway to see how the theory holds in the most extreme physical environments that the universe has to offer.

The theory was built on the insights of Einstein’s first theory of relativity, the “special theory”, one of a trio of breakthroughs that made his reputation in 1905. That theory dramatically abandoned the time-honoured description of the world in terms of absolute space and time in favour of a four-dimensional space-time (three spatial dimensions, one temporal one). In this new space-time, observers moving at different speeds got different answers when measuring lengths and durations; for example, a clock moving quickly with respect to a stationary observer would tell the time more slowly than one sitting still. The only thing that remained fixed was the speed of light, c, which all observers had to agree on (and which also got a starring role in the signature equation with which the theory related matter to energy, E=mc2).
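The slowing of a moving clock follows from a single expression, the Lorentz factor γ = 1/√(1 − v²/c²). As a quick sketch of the arithmetic (the chosen speed is purely illustrative):

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def lorentz_factor(v: float) -> float:
    """Time-dilation factor for a clock moving at speed v (m/s)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# A clock moving at 80% of light speed runs slow by a factor of 5/3:
# one of its seconds takes 1.67 seconds by a stationary observer's clock.
gamma = lorentz_factor(0.8 * C)
print(round(gamma, 4))  # 1.6667
```

At everyday speeds γ is indistinguishable from 1, which is why none of this was noticed before 1905.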

Special relativity applied only to special cases: those of observers moving at constant speeds in a straight line. Einstein knew that a general theory would need to deal with accelerations. It would also have to be reconciled with Isaac Newton’s theory of gravity, which relied on absolute space, made no explicit mention of time at all, and was believed to act not at the speed of light but instantaneously.

Einstein developed all his ideas about relativity with “thought experiments”: careful imaginary assessments of highly stylised states of affairs. In 1907 one of these provided him with what he would later refer to as his “happiest thought”: that someone falling off a roof would not feel his own weight. Objects in free-fall, he realised, do not experience gravity. But the curved trajectories produced by gravity—be they the courses of golf balls or planets—seemed to imply some sort of pushing or pulling. If golf balls and planets, like people falling off roofs, felt no sort of push or pull, why then did they not fall in straight lines?

The central brilliance of general relativity lay in Einstein’s subsequent assertion that they did. Objects falling free, like rays of light, follow straight lines through space-time. But that space-time itself is curved. And the thing that made it curve was mass. Gravity is not a force; it is a distortion of space-time. As John Wheeler, a physicist given to pithy dictums about tricky physics, put it decades later: “Space-time tells matter how to move; matter tells space-time how to curve.”

The problem was that, in order to build a theory on this insight, Einstein needed to be able to create those descriptions in warped four-dimensional space-time. The Euclidean geometry used by Newton and everyone else was not up to this job; fundamentally different and much more challenging mathematics was required. Max Planck, the physicist who set off the revolution in quantum mechanics, thought this presented Einstein with an insurmountable problem. “I must advise you against it,” he wrote to Einstein in 1913, “for in the first place you will not succeed, and even if you succeed no one will believe you.”

Handily for Einstein, though, an old university chum, Marcel Grossmann, was an expert in Riemannian geometry, a piece of previously pure mathematics created to describe curved multi-dimensional surfaces. By the time of his lectures in 1915 Einstein had, by making use of this unorthodox geometry, boiled his grand idea down to the elegant but taxing equations through which it would become known.

Just before the fourth lecture was to be delivered on November 25th, he realised he might have a bit more to offer than thought experiments and equations. Astronomers had long known that the point in Mercury’s orbit closest to the sun changed over time in a way Newton’s gravity could not explain. In the 1840s oddities in the orbit of Uranus had been explained in terms of the gravity of a more distant planet; the subsequent discovery of that planet, Neptune, had been hailed as a great confirmation of Newton’s law. Attempts to explain Mercury’s misbehaviour in terms of an undiscovered planet, though, had come to nought.

Famous long ago

Einstein found that the curvature of space-time near the sun explained Mercury’s behaviour very nicely. At the time of the lectures, it was the only thing he could point to that general relativity explained and previous science did not. Martin Rees, Britain’s Astronomer Royal, is one of those who sees the nugatory role played by evidence in the development of the theory as one of the things “that makes Einstein seem even more remarkable: he wasn’t motivated by any mysterious phenomena he couldn’t explain.” He depended simply on his insight into what sort of thing gravity must be and the beauty of the mathematics required to describe it.

After the theory was published, Einstein started to look for ways to test it through observation. One of them was to compare the apparent positions of stars that were in the same part of the sky as the sun during a solar eclipse with their apparent positions at other times. Rays of light, like free-falling objects, trace straight lines in space-time. Because the sun’s mass warps that space-time, the positions of the stars would seem to change when the rays skirted the sun (see diagram).

In 1919 Arthur Eddington, a famed British astronomer, announced that observations of an eclipse made on the West African island of Principe showed just the distortion Einstein had predicted (one of his images is pictured). “LIGHTS ALL ASKEW IN THE HEAVENS”, read the New York Times headline, adding helpfully that “Nobody Need Worry”. Einstein, while pleased, had faith enough in his idea not to have been on tenterhooks. When asked what he would have done had Eddington found a different result, he replied, “Then I would feel sorry for the good Lord. The theory is correct.”

As far as the rest of the world was concerned, Eddington’s result put general relativity more or less beyond doubt. But that did not make it mainstream. For one thing, it was hard to grasp. At a public event, Eddington was momentarily stumped by the suggestion that he “must be one of the three persons in the world who understand general relativity”. When the silence was taken for modesty, he replied “On the contrary, I am trying to think who the third person is!”

General relativity also seemed somewhat beside the point. The quantum revolution that Planck had begun, and that Einstein had contributed to in one of his other great papers of 1905, was bearing fascinating fruit. Together with a blossoming understanding of the atomic nucleus, it was at the centre of physicists’ attention. Special relativity had a role in the excitement; its most famous expression, E=mc2, gave a measure of the energy stored in those fascinating nuclei. General relativity had none.

What it offered instead was a way to ask questions not about what was in the universe, but about the structure of the universe as a whole. There were solutions to the equations in which the universe was expanding; there were others in which it was contracting. This became a topic of impassioned debate between Einstein and Willem de Sitter, a Dutch physicist who had found one of the expanding-universe solutions. Einstein wanted a static universe. In 1917 he added to his equations a “cosmological constant” which could be used to fix the universe at a given size.

That became an embarrassment when, in 1929, an American astronomer put forward strong evidence that the universe was, indeed, getting bigger. Edwin Hubble had measured the colour of the light from distant galaxies as a way of studying their motion; light from objects approaching the Earth looks bluer than it would otherwise, and light from objects receding looks redder. Hubble found that, on average, the more distant the galaxy, the more its light was shifted towards the red; things receded faster the farther away they were. The evidence for an expanding universe these redshifts provided led Einstein to reject the cosmological constant as the “greatest blunder of my life”.
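Hubble's observation boils down to a linear relation, v = H₀·d: recession velocity is proportional to distance. A toy illustration, using roughly 70 km/s per megaparsec for H₀ (a modern ballpark value, not Hubble's own much larger estimate):

```python
H0 = 70.0  # Hubble constant, km/s per megaparsec (approximate modern value)

def recession_velocity(distance_mpc: float) -> float:
    """Hubble's law: recession velocity (km/s) of a galaxy at a given distance."""
    return H0 * distance_mpc

# The farther the galaxy, the faster it recedes -- the signature of expansion.
for d in (10, 100, 1000):
    print(d, "Mpc ->", recession_velocity(d), "km/s")
```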

The theory had other implications at which its architect initially baulked. In the 1930s nuclear physicists worked out that stars were powered by nuclear reactions, and that when those reactions ran out of fuel the stars would collapse. Something like the sun would collapse into a “white dwarf” about the size of the Earth. Bigger stars would collapse yet further into “neutron stars” as dense as an atomic nucleus and just 20 kilometres or so across. And the biggest stars would collapse into something with no length, breadth or depth but infinite density: a singularity.

Finding singularities in a theory is highly distasteful to the mathematically minded; they are normally signs of a mistake. Einstein did not want any of them in his universe, and in 1939 he published a paper attempting to show that the collapse of giant stars would be halted before a singularity could be formed. Robert Oppenheimer, a brilliant young physicist at Berkeley, used the same relativistic physics to contradict the great man and suggest that such extreme collapses were possible, warping space-time so much that they would create regions from which neither light nor anything else could ever escape: black holes.

Oppenheimer boi

Oppenheimer’s paper, though, was published on the day Germany invaded Poland, which rather put the debate on hold. Just a month before, Einstein had written to Franklin Roosevelt highlighting the military implications of E=mc2; it would be for realising those implications, rather than for black holes, that Oppenheimer would be remembered.

In part because of Oppenheimer’s government-bewitching success, new sorts of physical research flourished in the post-war years. One such field, radio astronomy, revealed cosmic dramas that observations using light had never hinted at. Among its discoveries were sources of radio waves that seemed at the same time small, spectacularly powerful and, judging by their redshifts, phenomenally distant. The astronomers dubbed them quasars and wondered what could possibly produce radio signals with the power of hundreds of billions of stars from a volume little bigger than a solar system.

Roy Kerr’s solution to the equations of general relativity provided the answer: a supermassive spinning black hole. Its rotation would create a region just outside the hole’s “event horizon”—the point of no return for light and everything else—in which matter falling inward would be spun up to enormous speeds. Some of that matter would be squirted out along the axis of rotation, forming the jets seen in radio observations of quasars.

Disappear like smoke

For the first time, general relativity was explaining new phenomena in the world. Bright young minds rushed into the field; wild ideas that had been speculated on in the fallow decades were buffed up and taken further. There was talk of “wormholes” in space-time that could connect seemingly distant parts of the universe. There were “closed time-like curves” that seemed as though they might make possible travel into the past. Less speculatively, but with more profound impact, Stephen Hawking, a physicist (pictured, with a quasar), and Roger Penrose, a mathematician, showed that relativistic descriptions of the singularities in black holes could be used to describe the Big Bang in which the expansion of the universe began—that they were, in fact, the only way to make sense of it. General relativity gave humans their first physical account of creation.

Hawking boi

Dr Hawking went on to bring elements of quantum theory into science’s understanding of the black hole. Quantum mechanics says that if you look at space on the tiniest of scales you will see a constant ferment in which pairs of particles pop into existence and then recombine into nothingness. Dr Hawking argued that when this happens at the event horizon of a black hole, some of the particles will be swallowed up, while some will escape. These escaping particles mean, in Dr Hawking’s words, that “black holes ain’t so black”—they give off what is now called “Hawking radiation”. The energy lost this way comes ultimately from the black hole itself, which gives up mass in the process. Thus, it seems, a black hole must eventually evaporate away to nothingness.
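Hawking's result assigns a black hole a temperature inversely proportional to its mass, T = ħc³/(8πGMk_B): the heavier the hole, the colder it is, which is why stellar-mass black holes evaporate absurdly slowly. A back-of-the-envelope sketch:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 299_792_458.0       # speed of light, m/s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
K_B = 1.380649e-23      # Boltzmann constant, J/K
M_SUN = 1.989e30        # solar mass, kg

def hawking_temperature(mass_kg: float) -> float:
    """Black-hole temperature in kelvin: T = hbar * c^3 / (8 * pi * G * M * k_B)."""
    return HBAR * C**3 / (8 * math.pi * G * mass_kg * K_B)

# A solar-mass black hole sits at roughly 6e-8 K -- far colder than the
# cosmic microwave background, so today it absorbs more than it radiates.
print(hawking_temperature(M_SUN))
```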

Adding quantum mechanics to the description of black holes was a step towards what has become perhaps the greatest challenge in theoretical physics: reconciling the theory used to describe all the fields and particles within the universe with the one that explains its overall shape. The two theories view reality in very different ways. In quantum theory, everything is, at some scale, bitty. The equations of relativity are fundamentally smooth. Quantum mechanics deals exclusively in probabilities—not because of a lack of information, but because that is the way the world actually is. In relativity all is certain. And quantum mechanics is “non-local”; an object’s behaviour in one place can be “entangled” with that of an object kilometres or light-years away. Relativity is proudly local; Einstein was sure that the “spooky action at a distance” implied by quantum mechanics would disappear when a better understanding was reached.

It hasn’t. Experiment after experiment confirms the non-local nature of the physical world. Quantum theory has been stunningly successful in other ways, too. Quantum theories give richly interlinked accounts of electromagnetism and of the strong and weak nuclear forces—the processes that hold most atoms together and split some apart. This unified “standard model” now covers all observable forms of matter and all their interactions—except those due to gravity.

Some people might be satisfied just to let each theory be used for what it is good for and to worry no further. But people like that do not become theoretical physicists. Nor will they ever explain the intricacies of the Big Bang—a crucible to which grandiose theory-unifiers are ceaselessly drawn. In the very early universe, space-time itself seems to have been subject to the sort of fluctuations fundamental to the quantum world (like those responsible for Hawking radiation). Getting to the heart of such shenanigans requires a theory that combines the two approaches.

There have been many rich and subtle attempts at this. Dr Penrose has spent decades elaborating an elegant way of looking at all fields and particles as new mathematical entities called “twistors”. Others have pursued a way of adding quantum bittiness to the fabric of space-time under the rubric of “loop quantum gravity”. Then there is the “Exceptionally Simple Theory of Everything”—which isn’t. As Steven Weinberg, one of the unifiers whose work built the standard model, puts it, “There are so many theories and so few observations that we’re not getting very far.”

Dr Weinberg, like many of his colleagues, fancies an approach called superstring theory. It is an outgrowth of the standard model with various added features that seem as though they would help in the understanding of space-time and which its proponents find mathematically beguiling. Ed Witten of the Institute for Advanced Study (IAS) in Princeton, Einstein’s institutional home for the last 22 years of his life, is one of those who has raised it to its current favoured status. But he warns that much of the theory remains to be discovered and that no one knows how much. “We only understand bits and pieces—but the bits and pieces are staggeringly beautiful.”

This piecemeal progress, as Dr Witten tells it, offers a nice counterpoint to the process which led up to November 1915. “Einstein had the conception behind general relativity before he had the theory. That’s in part why it has stood: it was complete when it was formulated,” he says. “String theory is the opposite, with many manifestations discovered by happy accident decades ago.”

Entangled up in the blue

And the happy accidents continue. In 1997 Juan Maldacena, an Argentine theoretician who now also works at the IAS, showed that there is a deep connection between formulations of quantum mechanics known as conformal field theories and solutions to the Einstein equations called anti-de Sitter spaces (similar to the expanding-universe solution derived by Willem de Sitter, but static and much favoured by string theorists). Neither provides an account of the real world, but the connection between them lets physicists recast intractable problems in quantum mechanics into the sort of equations found in general relativity, making them easier to crack.

This approach is being gainfully employed in solving problems in materials science, superconductivity and quantum computing. It is also “influencing the field in a totally unexpected way,” says Leonard Susskind, of Stanford University. “It’s a shift in our tools and our methodology and our way of thinking about how phenomena are connected.” One possibility Dr Maldacena and Dr Susskind have developed by looking at things this way is that the “wormholes” relativity allows (which can be found in the anti-de Sitter space) may be the same thing as the entanglement between distant particles in quantum mechanics (which is part of the conformal field theory). The irony of Einstein’s spooky quantum bête noire playing such a crucial role has not gone unremarked.

There is more to the future of relativity, though, than its eventual subsumption into some still unforeseeable follow-up theory. As well as offering new ways of understanding the universe, it is also providing new ways of observing it.

This is helpful because there are bits of the universe that are hard to observe in other ways. Much of the universe consists of “dark matter” which emits no radiation. But it has mass, and so it warps space, distorting the picture of more distant objects just as the eclipse-darkened sun distorted the positions of Eddington’s stars. Studying distortions created by such “gravitational lenses”—both luminous (pictured, with Einstein) and dark—allows astronomers with the precise images of the deep sky today’s best telescopes provide to measure the distribution of mass around the universe in a new way.

A century ago Albert Einstein changed the way humans saw the universe. His work is still offering new insights today.
Einstein boi

Another form of relativity-assisted astronomy uses gravitation directly. Einstein’s equations predict that when masses accelerate around each other they will create ripples in space-time: gravitational waves. As with black holes and the expanding universe, Einstein was not keen on this idea. Again, later work has shown it to be true. A pair of neutron stars discovered spinning around each other in the 1970s are exactly the sort of system that should produce such waves. Because producing gravitational waves requires energy, it was realised that these neutron stars should be losing some. And so they proved to be—at exactly the rate that relativity predicts. This indirect but convincing discovery garnered a Nobel prize in 1993.

As yet, though, no one has seen a wave in action by catching the expansion and contraction of space that should be seen as one goes by, because the effects involved are ludicrously small. But researchers at America’s recently upgraded Laser Interferometer Gravitational-wave Observatory (LIGO) now think they can do it. At LIGO’s two facilities, one in Louisiana and one in Washington state, laser beams bounce up and down 4km-long tubes dozens of times before being combined in a detector to make a pattern. A passing gravitational wave that squashes space-time by a tiny fraction of the radius of an atomic nucleus in one arm but not the other will make a discernible change to that pattern. Comparing measurements at the two sites could give a sense of the wave’s direction.
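The scale of the challenge is easy to put in numbers. A gravitational wave of strain h stretches an arm of length L by ΔL = h·L; for a plausible astrophysical strain and LIGO's 4 km arms, that is a few thousandths of a proton's width. A rough sketch (the strain value is illustrative):

```python
ARM_LENGTH_M = 4_000.0      # LIGO arm length: 4 km
STRAIN = 1e-21              # illustrative strain for an astrophysical signal
PROTON_RADIUS_M = 8.4e-16   # approximate charge radius of a proton, m

# A wave of strain h changes an arm of length L by delta_L = h * L.
delta_l = STRAIN * ARM_LENGTH_M
print(delta_l)                    # 4e-18 m
print(delta_l / PROTON_RADIUS_M)  # a few thousandths of a proton radius
```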

Step into the light

The aim is not just to detect gravitational waves—though that would be a spectacular achievement—but to learn about the processes that produce them, such as mergers of neutron stars and black holes. The strengths of the warping effects in such cataclysms are unlike anything seen to date; their observation would provide a whole new type of test for the theory.

And history suggests there should be completely unanticipated discoveries, too. Kip Thorne, a specialist in relativity at the California Institute of Technology and co-founder of LIGO, says that “every time we’ve opened a new window on the cosmos with new radiation, there have been unexpected surprises”. For example, the pioneers of radio astronomy had no inkling that they would discover a universe full of quasars—and thus black holes. A future global array of gravitational-wave observatories could open a whole new branch of observational astronomy.

A century ago general relativity answered no one’s questions except its creator’s. Many theories are hit upon by two or more people at almost the same time; but if Einstein had not devoted years to it, the curvature of space-time which is the essence of gravity might not have been discovered for decades. Now it has changed the way astronomers think about the universe, has challenged them to try and build theories to explain its origin, and even offered them new ways to inspect its contents. And still, it retains what most commended it to Einstein: its singular beauty revealed first to his eyes alone but appreciated today by all who have followed. “The Einstein equations of general relativity are his best epitaph and memorial,” Stephen Hawking has written. “They should last as long as the universe.”

Should we feel excited or frightened by the idea of an AI model directing a robot?

I am a data science student, and more than once I’ve wondered if I’m doing all this studying for nought, because GPT, the AI, can do what I do; well, except stand in front of my bosses and give a presentation, at least until they can hire some robots. But for now, I remain relevant.

“If the AI can replace my work, then I don’t think I was doing a good job,” said Alex Konrad, a Forbes senior editor, in an interview. Safe to say I live by these words now.

It is the age of AI. Admittedly, AI has been around for a while, in the likes of Siri, Cortana, Google Assistant, and Alexa, but never as prominently as this year. We have OpenAI’s ChatGPT, Midjourney, Google’s Bard, Jasper, Stability, Bing AI, and more. Image generation from a prompt is incredibly brilliant, don’t you think? Coding capabilities are also very impressive (but does it work? Sometimes, yeah).

Artificial Intelligence has many a definition, the crux of it being, “the theory and development of computer systems able to perform tasks normally requiring human intelligence”. How does the computer learn? Through Machine Learning – it detects the patterns from training data and predicts and performs tasks without being manually or explicitly programmed. We have one more, Deep learning (done via neural networks) – a method in AI that teaches computers to process data in a way that is inspired by the human brain.

A frequently asked and debated question is, “Does the AI know what it’s doing?”

ChatGPT is an artificial intelligence chatbot developed by OpenAI and launched on November 30, 2022. GPT stands for Generative Pre-trained Transformer. When you feed it a prompt, it gives a very coherent output: mostly true, might be false, might be completely made up. Why is this? Why does a supposedly intelligent machine lie? Does it know it’s lying? Most of the ‘experts’ (or insert an equally semantic word here) say it doesn’t. This is because the output is merely a prediction of text based on the prompt you have given. ChatGPT was trained on a massive corpus of text data, around 570GB of datasets, including web pages, books, and other sources. It was born after running trillions of words, the equivalent of 300 years of reading, through supercomputers processing in parallel for months. After all this, the computer made about 170 billion connections between all these words. Incomprehensible, isn’t it? Math is beautiful. So anytime you enter a prompt, ChatGPT calculates through all these connections to give you the prediction of words that had the highest probability after all the back-end math was done. (Look into neural networks; it’s interesting to watch a machine be taught how to recognize handwritten numbers.) So yes, it is very possible that ChatGPT doesn’t know what you’re asking it, or what it’s replying to. It’s all math and predictions. However, this may change as we keep teaching AI not just to predict data but to understand and learn any intellectual task.
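The “prediction of text” idea can be shown with a toy: count which word follows which in a training corpus, then always pick the most probable continuation. Real models use billions of learned weights rather than raw counts, but the principle of outputting the likeliest next token is the same. A minimal sketch:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count, for each word, how often each possible next word follows it."""
    words = corpus.split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(follows: dict, word: str) -> str:
    """Return the most probable next word seen in training."""
    return follows[word].most_common(1)[0][0]

model = train_bigrams("the cat sat on the mat and the cat slept")
# "cat" followed "the" twice in training, "mat" only once,
# so the model predicts "cat" -- with no idea what a cat is.
print(predict_next(model, "the"))  # cat
```

The toy model has never “understood” a single word; it has only tallied statistics. That, scaled up enormously, is the core of the prediction argument.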

Is there a law to protect us against AI?

In April 2021, the European Commission proposed the first EU regulatory framework for AI. It says that AI systems that can be used in different applications are analyzed and classified according to the risks they pose to users. The different risk levels will mean more or less regulation. The first ever AI Act was created on the 8th of June, 2023 by the European Commission for the use of artificial intelligence in the European Union as part of its digital strategy. It contains different rules for different risk levels and Generative AI transparency requirements. This is a good start because there are several concerns about the data used to train the AI. For example, in February, Getty Images sued Stability AI, a smaller AI start-up, alleging it illegally used its photos to train its image-generating bot. ChatGPT maker OpenAI is facing a class action over how it used people’s data. We have people creating content via AI and passing it off as theirs, and this information might be false. There are many concerns over who is responsible if a human was to use AI for ‘wrong’.

Is AI sentient?

Can AI perceive or feel things? Current applications of AI, like language models (i.e., GPT and Google’s LaMDA), are not sentient. They are only trained to sound like they know what they are talking about – ‘they’ being AI collectively. Will we know if it ever became sentient? There is no consensus on accurately determining if an AI is conscious, given our current understanding of consciousness. Scary sounding, isn’t it? That’s because it is, but it is also very exciting.

The AI revolution is here, and a lot is about to change. Are we ready?