Our New Overlords?

Intelligence is easily the most profound reason we humans rule planet Earth.

It’s also a case where the smallest of percentages makes all the difference.

Consider that we share about 90% of our DNA with blimmin’ mice for goodness sake. And yet the differences couldn’t be starker. In fact, we share fully 99% with chimpanzees, and even though we’ve all met “those people” who make us wish for a chimp, for the most part we humans trump chimps by most any metric.

Intelligence is also the most dangerous thing about us.

Consider what being another species on this planet looks like. Not great. We tend to think of danger only as it relates to us (humans). And THAT may be on the cusp of changing.

Let me explain:

Man, and by that I mean you and me, has, I’ll argue, thanks to supreme intelligence (certainly over chimps and our fellow species sharing this planet), reconfigured the WHOLE WORLD to suit our purposes. But it’s only happened in the last little itty-bitty fraction of a fraction of the time we’ve been hammering out a living on this ball of dirt.

If you were a Yoda on steroids, smart and living for billions of years, and you’d watched this planet from the stars for all the time it’s been doing its thing orbiting the sun, you’d have nodded off from intense boredom. Not a helluva lot happened.

In fact, aside from the tectonic plates shifting about (which themselves took longer than women’s clothes shopping… way longer), you’d not have noticed much change.

And then… BAM!

In the last 0.001% of that history you’d have seen radical change. For starters, as the planet rotates, causing night and day, you’d have noticed that the bits experiencing darkness bloody glow. No really, they do.

Yoda-like calm or not, after literally billions of years of NOTHING, I’m pretty sure you’d be sitting bolt upright thinking, “Hey wait, what the hell?”

What you’d also have noticed is large swathes of land, completely reconfigured to suit the purposes of just one species on this planet. And it ain’t the fish.

Agricultural farmland in Imperial Valley, California

What you’d be witnessing is parabolic growth in intelligence: an intelligence that now decisively reigns supreme on the planet, traversing geographies, ecosystems, and continents.

The flip side to this intelligence is, of course, arrogance.

Think of it like this…

We don’t hate monkeys, or mice, or most any of our fellow creatures on this planet. BUT we certainly don’t miss a beat when we crush their poor little skulls in a lab in order to better understand what might happen to us humans in the event of head trauma.

We don’t hate foxes but we don’t think too much about their little homes when we clear land to grow our sandwiches.

We don’t hate any of these creatures. In fact, we tend to find them cute and of interest at an individual level, but we sure as hell aren’t going to let THEIR discomfort get in the way of OUR progress.

Now, don’t get me wrong. I’m not one of the Marxist, anti-human crowd who believes we’re all a plague and should all stop having children, take up self-loathing, and pop cyanide for brekkie. I’m just pointing out some facts because they’re important for what comes next… and maybe, just maybe (yes, actually definitely) why we should not be so arrogant about the risks.

Enter Artificial Intelligence…

And no, I’m not talking about a blonde who’s dyed her hair brown. I’m talking real mind-blowing smarter-than-isht AI.

Even Alan Turing himself predicted that one day computers would have minds of their own. He predicted that back in 1950, though he figured it was 100 years away. He was right and he was wrong: they do have minds of their own, and it took half the time.

Two years ago, as The Telegraph reported, Google’s DeepMind AI program AlphaGo smoked world champion Lee Sedol at the highly complex board game Go:

Thousands of years of human knowledge has been learned and surpassed by the world’s smartest computer in just 40 days, a breakthrough hailed as one of the greatest advances ever in artificial intelligence.

Google DeepMind amazed the world last year when its AI programme AlphaGo beat world champion Lee Sedol at Go, an ancient and complex game of strategy and intuition which many believed could never be cracked by a machine.

AlphaGo was so effective because it had been programmed with millions of moves of past masters, and could predict its own chances of winning, adjusting its game-plan accordingly.

What was unique about this (we all remember how Garry Kasparov was defeated by IBM’s Deep Blue supercomputer back in the ’90s) is that with Go the AI system was only taught the rules of the game and none of the possible moves. It had to learn them iteratively, on the fly. In other words, it had to “think”:

According to Wired it took AlphaZero just four hours to become a chess champion, two hours for shogi, and eight hours to defeat the world’s greatest Go-playing computer program.

So this is significant because it means we’ve gone beyond “rule-based” AI to systems that teach themselves.
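If you want a feel for what “learning the moves on the fly” actually means, here’s a toy sketch in Python. To be clear, this is purely my own illustration and nothing to do with DeepMind’s actual code: a little program that is given nothing but the rules of noughts and crosses and gets better purely by playing against itself. It’s the same broad idea behind AlphaZero, shrunk down to a few lines.

```python
# Toy illustration only (not DeepMind's code): a learner that knows nothing but
# the rules of noughts and crosses and improves purely through self-play.
import random
from collections import defaultdict

WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X', 'O', 'draw', or None if the game isn't over yet."""
    for a, b, c in WINS:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return "draw" if "." not in board else None

value = defaultdict(float)   # learned value of each position, from X's point of view
visits = defaultdict(int)

def self_play_game(epsilon=0.2):
    board, player, history = ["."] * 9, "X", []
    while winner(board) is None:
        legal = [i for i, s in enumerate(board) if s == "."]   # the only hard-coded part: the rules
        if random.random() < epsilon:
            move = random.choice(legal)                        # explore a random move
        else:
            def score(m):                                      # otherwise exploit what it has learned
                nxt = board[:]
                nxt[m] = player
                v = value[tuple(nxt)]
                return v if player == "X" else -v
            move = max(legal, key=score)
        board[move] = player
        history.append(tuple(board))
        player = "O" if player == "X" else "X"
    result = {"X": 1.0, "O": -1.0, "draw": 0.0}[winner(board)]
    for state in history:                                      # update value estimates from the outcome
        visits[state] += 1
        value[state] += (result - value[state]) / visits[state]

for _ in range(20000):   # every game it plays against itself makes the next one a little smarter
    self_play_game()
```

Nobody feeds it a library of grandmaster games; every scrap of “knowledge” in that value table comes from the outcomes of games it played against itself.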

Now, I can hear you saying already, “But Chris, we’re going to have exponential growth the likes of which we’ve never seen and it’s going to be even more amazing than the advent of the internet, electricity, fire, and the wheel combined.”

And to that I say, “Well, yeah, that’s partly my concern. Sure, I get it. I can envisage some neck-snapping changes, for the good.”

And I know grown men who would rather pull their own heads off than go down this path. But go down it we are.

Think I’m kidding? Hong Kong’s metro, arguably the most sophisticated in the world, isn’t run by little Chinese people running around. It’s run by an AI:

The world’s most envied metro system in Hong Kong

…the algorithm that’s responsible for the incredible task of repairing and maintaining the system. That means assigning 10,000 employees to take care of more than 2,500 engineering jobs every week—an insanely complicated puzzle that would normally take a panel of humans two days of strategizing and planning. Instead, their bot can calculate the most efficient assignments and adapt to new information in seconds.
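To give you a feel for what that kind of scheduling engine is actually doing, here’s a toy sketch, again in Python and again purely my own illustration (the MTR’s real system isn’t described in any detail here): workers get matched to jobs by minimising a made-up cost table with an off-the-shelf solver.

```python
# Toy illustration of assignment-style scheduling (not the MTR's actual system).
# Rows are workers, columns are jobs; the entries are invented "costs"
# (think travel time, overtime, skill mismatch). The solver finds the
# one-to-one matching with the lowest total cost.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(seed=0)
n_workers, n_jobs = 6, 6
cost = rng.integers(1, 10, size=(n_workers, n_jobs))   # made-up numbers for illustration

workers, jobs = linear_sum_assignment(cost)             # optimal assignment, computed in milliseconds
for w, j in zip(workers, jobs):
    print(f"worker {w} -> job {j} (cost {cost[w, j]})")
print("total cost:", cost[workers, jobs].sum())
```

The real thing juggles thousands of workers, shift rules, and safety constraints, and re-plans as new information arrives, but the core trick, turning “who does what” into an optimisation problem a machine can chew through in seconds, is the same.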

And yes, I can imagine how blockchain can integrate with AI and how DAOs (decentralized autonomous organizations) will function, and in fact, almost certainly rise to become THE top organisations in the world.

I can see how governance roles will be meted out by an AI. Hiring, firing, profiling, KPIs: none of it done by humans. There will likely be incredible tools for management, decision making, agreement making, all things which can be automated iteratively, and they’ll all function like code. I get all that.

But…

Consider what I was saying about foxes, lab rats, and the like. Consider this when thinking about a super AI which is the exponentialised version of what began merely as a niche AI project. But now this super AI iterates, betters, corrects, adjusts, quantifies, and if necessary, replicates based on millions, nay trillions, heck even exabytes (or whatever comes next) of data.

Will such an AI, or its cousins, sisters, brothers, half brothers, and all its interbred family, “back off” from crushing a fox’s… sorry, a human’s skull in order to “advance”?

Like anything invented by man, it can be used for good or evil.

Derivatives can help farmers hedge against crop failure, but they can also be used to blow up investors’ money and destabilise financial markets. Water sustains life but can also drown us. AI can be used to help solve some of the most complex problems mankind has, from curing Alzheimer’s to ridding the globe of plastic and everything in between.

But with man-made things in the past, it has always been up to man to decide how to use or abuse them. That looks like it’s about to change.

Who to Trust?

In a World Economic Forum interview, Sergey Brin of Google remarked that, despite being just a few paper clips away from Google Brain’s Jeff Dean, he did not see deep learning coming.

Now, if Brin himself, who co-founded Google, couldn’t have predicted the advance of deep learning, then what does that tell us about what we know, what we think we know, and what comes next?

Now, you may be sitting there saying to yourself, “Heck, Chris you know nothing about what you’re writing about. After all, you’re a money manager, not a data scientist or programmer.”

And I’ll laugh and say, “Of course, yes. You’re dead right.”

But I’ll whisper a little truth, and I say this after having spent countless hours reading academic research papers on this very topic:

“Nobody knows.”

– Chris

“Prediction is very difficult, especially if it’s about the future.” — Niels Bohr, Nobel laureate in physics

