
AI: Progress vs. Safety



We've all heard the various doomsday scenarios: AI taking over the world, becoming Skynet, or leaving us jobless with nothing but (hopefully) Universal Basic Income to survive on. But what's really going on in the world of AI? Let's break it down and look at where we are, how we got here, and what it all means.

The Current State of AI

First off, let's be clear: we're not living in a sci-fi movie just yet. The AI we're using today - think ChatGPT, Claude, or Gemini - isn't the sentient, world-dominating force we've seen in films like Terminator or I, Robot. These are Large Language Models (LLMs), and while they're impressive, they're just one type of AI among many.

In fact, we've been using various forms of AI for years without even realizing it. Your iPhone predicting text, your photos app removing backgrounds, your smartwatch tracking your sleep - all of these use some form of AI. What's changed is that tools like ChatGPT have brought AI into the spotlight, making it not just a topic of everyday conversation but also a huge productivity tool.

I want to point out that as great as many of the LLMs are, they are also very good at confidently outputting completely incorrect information. Some see this as lying; the proper term for it is "hallucinating". This happens for several reasons: limited training data, poor prompts, basic errors, limited context windows, and more. The bottom line is that we have come a long way and these tools are great, but they still make lots of mistakes and are severely limited.
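
To make the hallucination point a little more concrete, here's a tiny illustrative Python sketch. It's a toy, not how any real LLM is implemented, and the words and probabilities are made up for the example - but it captures the core idea: the model repeatedly picks a plausible-sounding next word, and nothing in that loop ever checks whether the final sentence is actually true.

import random

# Toy "language model": for each word, a made-up distribution over plausible next words.
# A real LLM learns probabilities like these over tokens, using billions of parameters.
NEXT_WORD_PROBS = {
    "The": {"Eiffel": 1.0},
    "Eiffel": {"Tower": 1.0},
    "Tower": {"is": 1.0},
    "is": {"in": 1.0},
    "in": {"Paris.": 0.7, "Rome.": 0.3},  # plausible-sounding, but sometimes just wrong
}

def generate(start_word, max_words=6):
    """Generate text by repeatedly sampling a plausible next word - with no fact-checking step."""
    words = [start_word]
    for _ in range(max_words):
        options = NEXT_WORD_PROBS.get(words[-1])
        if not options:
            break
        # Pick the next word weighted by how plausible it sounds, not by whether it's true.
        words.append(random.choices(list(options), weights=list(options.values()))[0])
    return " ".join(words)

print(generate("The"))  # usually "The Eiffel Tower is in Paris." - but occasionally "... in Rome."

The takeaway: the generation loop only optimizes for what sounds likely to come next, which is why a fluent, confident answer can still be flat-out wrong.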


We currently have Narrow AI: it's good at specific tasks, but not at everything in general. As new models and technologies are released and improved, we inch closer to AGI.

The Road Ahead: AGI and Superintelligence

When we talk about the future of AI, two key concepts come up: Artificial General Intelligence (AGI) and Superintelligence.

AGI is the next big milestone. It's when AI can perform tasks as well as or better than humans, with a general understanding that allows it to tackle a wide range of challenges. This is what companies are racing towards, and it's where things could start to get interesting in terms of how our society functions.

Superintelligence is the step beyond that - when AI becomes so advanced that it can improve itself, potentially at a rate far beyond human comprehension. This is where both the greatest promises and the biggest fears lie (getting closer to the Skynet and Age of Ultron scenarios - if done incorrectly).

The Drama at OpenAI - with a bit of speculation

The recent drama at OpenAI - with Sam Altman's firing and rehiring, and the departure of key figures like Ilya Sutskever (both Altman and Sutskever were among OpenAI's co-founders) - is a perfect example of the tension in the AI world right now.

On one hand, we have the push for innovation and product development, represented by figures like Sam Altman. They've done an incredible job of bringing AI into the mainstream and showing us its potential.


On the other hand, we have those focused on AI safety, like Ilya Sutskever. These researchers are concerned about the potential risks of advanced AI and want to ensure we're developing these technologies responsibly.

Both sides are crucial. We need innovation to push the boundaries of what's possible, but we also need safeguards to protect against potential catastrophic outcomes.

The Importance of AI Safety

Here's where I stand: I love the advancements we're seeing in AI. Tools like Claude and ChatGPT are genuinely impressive; even in their current state, they have already improved our workflows and abilities in our daily lives, and they have the potential to help solve major world problems. That being said, I also believe it's absolutely critical that we take AI safety seriously - at the very least as seriously as we take its innovation.

We're dealing with technologies that we don't fully understand. Even the creators of these LLMs can't explain exactly how they arrive at their outputs. As we approach AGI and potentially superintelligence, the stakes get exponentially higher.

We need to consider questions like:

  • How do we ensure AI alignment with human values?

  • How do we prevent misuse by bad actors?

  • How do we maintain control over systems that might become smarter than us?

These aren't just plot points for sci-fi novels anymore. They're real concerns that need serious consideration.

A New Hope: Safe Superintelligence

While it's unfortunate that Sam Altman and Ilya Sutskever had to part ways, there's a silver lining to this situation. Ilya has now founded a new company called "Safe Superintelligence", dedicated solely to AI safety. This development is very promising and could be a game-changer in the field.

The creation of Safe Superintelligence emphasizes the crucial point that AI safety is not just an afterthought or a side project. It's a complex, critical issue that deserves focused, full-time attention from the brightest minds in the field.

However, this also raises some important questions, especially when it comes to raising funding, satisfying shareholders, and turning a profit. How will a company focused purely on AI safety sustain itself financially? It's a shame that these vital efforts often come down to profitability - after all, that's part of what led OpenAI to shift from a non-profit to a "capped-profit" model, and arguably contributed to their increased focus on product development over safety research.

That said, this is the reality of the market-driven world we live in. My hope is that Safe Superintelligence will find a way to continue its crucial work without compromising its mission.

More importantly, I hope that the discoveries and progress made by Safe Superintelligence won't remain siloed within the company. We need these advancements to be shared and implemented across the AI industry. The race to AGI involves many companies and individuals working tirelessly, often in competition. But when it comes to safety, we need collaboration.

I hope for a future where the safety protocols developed by Ilya's team (or some other group dedicated to AI Ethics and Safety) become standard across the industry. Where every AI company, from tech giants to startups, incorporates these safeguards into their development process (and not just because of regulation). That's the kind of collaborative approach we need to ensure a safe path to AGI and beyond.

Wrapping up...

Will AI take over the world? Probably not. But could it radically reshape our society in ways we're not prepared for? Absolutely. That's why it's crucial that we approach this technology with both enthusiasm and caution.

As we continue down this path of rapid AI development, let's remember to balance our excitement for progress with a commitment to safety and responsible innovation. The future of AI is bright, but it's up to us to ensure it's a future that benefits all of humanity. With efforts like Safe Superintelligence leading the way in AI safety research, we have reason to be cautiously optimistic about that future.



