This edition of The AEC Matrix is a bit of a confession, and a bit of a realization on my part about why I'm really writing this newsletter in the first place. It's because I see AI as both a huge opportunity and an existential threat. I'm scared.
I'm going to try to explain why.
A Transformational Period
Human history has been full of fascinating eras to be alive. I don't know where the last few decades rank in the grand scheme of things, but I'd be willing to bet that historians will look back on this period as uniquely transformational. I don't think we (or the machines) will know how unique or how important until it's well back in the rearview mirror.
I was born before the internet. I clearly remember a time when most people did not have personal computers in their homes. The personal computers that existed were clunky, expensive, and didn't really do much.
As a kid, things like Netscape Navigator and AOL were revolutionary. Portable cell phones (once they were invented) came in pocketbook-sized satchels, not thin slabs of glass.
Now as an adult I run a business in an era where everyone carries a supercomputer in their pocket and the same software can write both English essays and computer code from plain language prompts. I can have a live face-to-face conversation with anyone on earth at any moment of the day or night for free. I have a personalized language tutor ready anytime I am—in any language—also in my pocket, also for free.
The pace of this technological change has been brisk, but generally slow enough that it's all felt incremental—and comfortable. One thing logically led to the next and while I have been frequently amazed, I've never felt intimidated or scared.
Until recently.
The recent developments in commercially available artificial intelligence (AI) tools freak me out. There's no other way to put it.
What's different?
Intelligence is at the core of what makes humans unique on earth. We don't run the fastest. We are not the strongest. We have pitiful claws and teeth and are mediocre swimmers.
But we're wicked smart.
Human intelligence has allowed us to accomplish innumerable feats that, to our ancestors, would be indistinguishable from magic.
Lots of other animals are pretty smart too. The octopus, for example, can do some really neat stuff (check out My Octopus Teacher if you haven't seen it). But octopi don't build giant metal tubes to fly themselves across continents, or transmit images of themselves through the air to other octopi in distant oceans.
Intelligence, and the innovation it facilitates, is the human superpower.
Which is why I'm more than a little concerned now that some of my fellow hairless monkeys seem to have come within spitting distance of creating an artificial intelligence that's smarter than we are.
Quick Break for Definitions and Background
There are a few terms from the AI world that I want to share here, in case you're not familiar with them.
AGI refers to "artificial general intelligence".
ASI refers to "artificial super intelligence".
Alignment refers to the extent to which AI (including an AGI or ASI) is aligned with human interests.
Creating AGI and ASI is a thing that folks are working on. People are actively trying, right now, to create an intelligence that supersedes human intelligence in every way.
People are also working on "the alignment problem", as it's called. Un-aligned AI is a well-recognized risk in AI circles.
As a matter of fact, people have been working on these issues for a while. Like decades.
This was news to me. Until recently I didn't know that something like AGI was a thing, nor that we needed to be worried about it being aligned with us.
Back to the Program
So while what we have today in the likes of ChatGPT and Bing are very interesting and certainly transformational on their own, they're only steps along the path.
Companies working on AGI/ASI are well funded, with billions of dollars at their disposal. The biggest tech giants in the world are in the game.
Some (all?) of them are already using their existing AIs to help build better AIs. The extent to which that process is supervised or understood by the human engineers is not entirely clear, though let's assume for now it's fully supervised and fully understood.
Will that always be the case? Even if that's the intent, what are the chances no one screws up, ever? And if someone does screw up, just once, with a sufficiently-powerful AI... then what?
The point is there are smart people, people who know way more about AI than I do, who are very concerned about what happens as these AIs start to get more powerful. The CEO of OpenAI (creator of ChatGPT) just wrote a big long blog post about this exact topic.
Maybe it won't work?
Of course there's a chance that we never develop AGI or ASI. Maybe it's impossible or maybe it's hundreds of years off. Perhaps.
But I'd encourage you to consider other human achievements that many considered impossible before they were real.
Another way to look at it is this: through history, how many problems have humans attempted to tackle but not solved, given enough time? The list is short.
The bull case is scary enough.
Ok, taking a step back for a minute, let's assume that AI development proceeds sort of how technology development has gone over the last few decades. New features come online, new capabilities periodically emerge. But there's no AGI or ASI for the foreseeable future.
Given what ChatGPT is capable of today, I can imagine even that base-case scenario being incredibly disruptive for knowledge-work businesses (like mine).
I've written before that the underlying knowledge—the basic information—that firms like mine use in our daily work is somewhat of a commodity these days. It's all over the internet.
One of the main services we provide is synthesizing that information in a way that helps our clients solve their problems. I believe many knowledge-work businesses operate in a similar fashion (e.g. lawyers, doctors, management consultants).
Well, as it happens, synthesizing vast amounts of information is exactly what the current crop of AI generalists (e.g. ChatGPT, Bing) are already pretty good at. Even with only incremental improvement, it seems clear to me that it's a matter of when, not if, they become better than most human specialists at this skill.
Staying ahead of (or at least on top of) this tidal wave by learning as much as possible and actively trying to be a practitioner with the tools seems to me to be the only reasonable course of action.
I'd rather experience some turbulence near the bleeding edge here than be picking up the rear and wondering what happened when Samantha or Ava start serving my clients better than I can.
But I'm honestly not sure it's possible to stay ahead of the tidal wave forever.
Time will tell.
I'm still optimistic.
All that said, I have a lot of optimism about AI and all the ways it could help both my business and humanity as a whole.
I certainly hope that humans continue to be wicked smart and figure out how to work with AI as a force for good. Like I wrote earlier, we have a pretty good track record at solving big problems, even when those solutions are not obvious at the start.
Humanity is fascinating and whatever we come up with here will, if nothing else, just add to the beautiful tapestry of our history. As much as I can, I hope to be a curious observer as the weave emerges through this transformational period.