Anyone who knows me or has kept up with this blog at all – all 2 of you! Lol – knows that I have been following AI development for several years now. And for the last 7-8 years, I have been saying that we are much closer to achieving AGI than many of the experts and super smart people have claimed. I know, for me, when AlphaGo beat Lee Sedol back in 2016, I viewed it as a stunning achievement that heralded a new age of AI – one that would lead to AGI within a relatively short period of time. Of course, there were all the skeptics and SMEs saying how silly it was to think that, assuring everyone that we were still 20-30 years away from AGI, IF it could ever be achieved at all. But I had my own ideas, as did a few others.
Now here we are, in 2023, and I don’t even need to talk about ChatGPT. You’d have to have been living under a rock not to know about it, its crazy capabilities, and just how quickly it has advanced over the last year. Google has released its version of an advanced AI, Bard, and other companies are doing the same. We are truly in an AI “arms race,” if you will. And there is no sign of anything slowing down. Not that some people aren’t *finally* saying maybe we should…
Just Wednesday, an open letter was released calling for a halt to the training of AI systems more powerful than GPT-4 until some kind of regulation or oversight can be implemented to ensure safety and the best outcomes. Notably, it was signed by a lot of very smart people, including *real* AI experts like Stuart Russell, Max Tegmark, etc. Of course, the name everyone in the media kept going on about was Elon Musk – whom I now cannot stand. That being said, he has been warning about the dangers of unrestricted AI development and the risks it poses to humanity for many years now.
Thing is, it’s kind of like trying to close the barn door after the cows have already gotten out – it’s a little too late now. There’s no way anyone working on those models is going to slow down at this point and risk someone else beating them to the AGI Grail. It’s notable that no one who currently works at OpenAI signed the open letter. And even though Sam Altman, the CEO of OpenAI, has been vocal about the existential risks AI poses, he has also tried to assure everyone that they take the alignment problem seriously and are working to make sure nothing bad happens. Well then, that settles it! I feel better – do you? Lol.
Thing is, again, if you know me at all, you know that I am a bit of a doomer. I think the entire premise our current civilization is built on (described PERFECTLY in this Medium post: ) is just rotten to the core. Unlike those who still have some hope that we can turn things around, I fully believe that – in the words of Tool – “The only way to fix it is to flush it all away.” We need a HARD reboot to escape the predatory, parasitic, sociopathic societal construct we currently live in. I’ve thought for a while now that a major solar event, i.e., a Carrington 2.0, or better yet, a Miyake Event, would do the trick – and one might well be in the offing here in solar cycle 25.
But another possible avenue is the arising of a true instance of AGI – one that sees, logically, rationally, and without rich-human bias, the relationship between the control structures of our civilization and the planet itself, with its many lifeforms, for what it really is. Perhaps that superintelligence would/will do something to reset the scales, if you will? One can hope. I have my own little dream/fantasy about it, described in this post:
Thing is, contrary to what a lot of skeptics, naysayers, and doubters have been saying over the last several years about how we are years, if not decades, away from AGI – I don’t think we have very long to wait to see what will happen. In fact, I came across this article just this morning:
So now, it’s not just “doomers” and boys who cried wolf one too many times, like me, saying it. No, it’s people who are “in the know” saying it.
It’s an incredibly interesting, even exhilarating, time to be alive! And I have to admit – the part of me that considers itself a prophet of sorts is pretty stoked to see that it was right all along, even though so many people who knew so much more said it was silly, uninformed, or just plain wrong. There is still something to be said for good old intuition. We prophets still have our place. And who knows – maybe AGI will appreciate us more than all the current human “experts” blindly playing with fire!