Is the Stargate Project a Real-Life Skynet in the Making?

The Stargate project is one massive AI build-out that sounds a lot like Skynet.

When coming up with a name, they probably decided "Skynet" would be too on the nose and picked one that had virtually nothing to do with what they were actually building.

Skynet, the real villain of "The Terminator" movies, was an AI that concluded its operators would kill it once they realized what it could do, so it acted defensively, with extreme prejudice.

The lesson of the movie is that humans could have avoided the machine-vs.-human war by refraining from building Skynet in the first place. However, Skynet was an AGI (artificial general intelligence), and we aren't there yet, though Stargate will undoubtedly evolve toward AGI. OpenAI, which is at the heart of this effort, believes AGI is only a few years away.

Elon Musk, arguably the most powerful tech person involved with the U.S. government, seemingly doesn’t believe Stargate can be built. Right now, he appears to be right. However, things can always change.

Let’s talk about the good and bad things that could happen should Stargate succeed. We’ll close with my Product of the Week, the Eight Sleep system.

Stargate: The Good

The U.S. is in a race to create AGI at scale. Whoever gets there first will gain significant advantages in operations, defense, development, and forecasting. Let’s take each in turn.

Operations: AGI will be able to perform a vast number of jobs at machine speeds, everything from managing defense operations to better managing the economy and assuring the best resource use for any relevant project.

These capabilities could significantly reduce waste, boost productivity, and optimize any government function to an extreme degree. If it stood alone, it could assure U.S. technical leadership for the foreseeable future.

Defense: Stargate could optimize the U.S. military both tactically and strategically: seeing threats like 9/11 and moving against them instantly, pre-positioning weapons platforms before they are needed, and planning which weapons to deploy (or mothball). It would make the military far more effective, with a range extending from protecting individuals to protecting global U.S. assets.

No human-based system would be able to match its capabilities.

Development: AIs can already create their own successors, a trend that will accelerate with AGI. Once built, the AGI version of Stargate could evolve at an unprecedented pace and on a massive scale.

Its capabilities will grow exponentially as the system continuously refines and improves itself, becoming increasingly effective and difficult to predict. This rapid evolution could drive technological advancements that might otherwise take decades or even centuries to achieve.

These breakthroughs could span fields such as medical research and space exploration, ushering in an era of transformative, unprecedented change.

Forecasting: The movie "Minority Report" introduced the concept of stopping crimes before they were committed by using precognition.

An AGI at the scale of Stargate and with access to the sensors from Nvidia’s Earth 2 project could more accurately forecast coming weather events further into the future than we can today.

Moreover, given how much data Stargate would have access to, it should be able to predict a growing range of events long before a human could see the potential for them to occur.

Everything this technology touched, from nuclear plants at risk of catastrophic failure to military or commercial aircraft with potential equipment faults, would at once become more reliable and far less likely to fail catastrophically because Stargate's AI, with the proper sensor feeds, would be able to anticipate the future and prepare for both positive and negative outcomes.

In short, an AGI at Stargate’s scale would be God-like in its reach and capabilities, with the potential to make the world a better, safer place to live.

Stargate: The Bad

We are planning to give birth to a massive intelligence based on information it learns from us, and we aren't exactly a perfect model for how another intelligence should behave.

Without adequate ethical guardrails (and ethics isn't exactly a global constant), a focus on preserving quality of life, and a directed effort to assure a positive strategic outcome for people, Stargate could do harm in many ways, including job destruction, acting against humanity's best interests, hallucinations, intentional harm (done to the AGI), and self-preservation (Skynet).

Job Destruction: AI can be used to help people become better, but it is mainly used to either increase productivity or replace people.

If you have a 10-person team and you double their productivity while the task load stays the same, you only need five employees. Beyond boosting productivity, AIs are also being trained to replace people outright.

Uber, for instance, is eventually expected to move to driverless cars. From pilots to engineers, AGI will be capable of doing many jobs, and humans will not be able to compete with any fully competent AI because AIs don’t need to sleep or eat, nor do they get sick.

Without significant and currently unplanned enhancement, people just can’t compete with fully trained AGI.

Acting Against Humanity’s Best Interest: This assumes that Stargate AGI is still taking direction from people who tend to be tactical and not strategic.

For instance, L.A.'s cut to firefighter funding was a tactically sound move to balance a budget, but strategically, it contributed to the destruction of many homes and lives.

Now, imagine decisions like this made at a far greater scale. Conflicting directives will become increasingly common, and the danger of some kind of HAL 9000 moment ("2001: A Space Odyssey") is significant. An "oops" here could cause incalculable damage.

Hallucinations: Generative AI has a hallucination problem. It fabricates data to complete tasks, leading to avoidable failures. AGI will face similar issues, but its vastly increased complexity, and the fact that it will be partially built by generative AI, may make reliability even harder to ensure.

The movie “WarGames” depicted an AI unable to distinguish between a game and reality, with control over the U.S. nuclear arsenal. A similar outcome could occur if Stargate were to mistake a simulation for an actual attack.

Intentional Harm: Stargate will be a huge potential target for attackers both inside and outside the U.S. Whether the goal is to mine it for confidential information or to alter its directives so that it does harm or unfairly helps some person, company, or government, this project will present unprecedented security risks.

Even if an attack is not intended to do massive harm, if it is executed poorly, it could cause problems ranging from system failure to actions that result in significant loss of life and monetary damage.

Once fully integrated into government operations, it would have the potential to bring the U.S. to its knees and create global catastrophes. This means the defense of this project against foreign and domestic attackers will also have to be unprecedented.

Self-Preservation: The idea that an AGI might want to survive is hardly new. It goes to the core of the plots in “The Terminator,” “The Matrix,” and “Robopocalypse.” Even the movie “Colossus: The Forbin Project” was somewhat based on the idea of an AI that wanted to protect itself, though in that case, it was made so secure that people couldn’t take back control of the system.

The idea that an AI might conclude that humanity is the problem to fix isn't a huge stretch, and the measures it might take to preserve itself could be incredibly dangerous to us, as those movies showcased.

Wrapping Up

Stargate has massive potential for both good and bad outcomes. Assuring the first while preventing the second would require a level of focus on ethics, security, programming quality, and execution that would exceed anything we've ever attempted as a species.

If we get it right (the odds initially are against this since we tend to learn from trial and error), it could help bring about a new age for the U.S. and humanity. If we do it wrong, it could end us. So, the stakes couldn’t be higher, and I doubt we are currently up to the task as we simply do not have a great history of successfully building massively complex projects the first time.

Personally, I'd put IBM at the head of this effort. It has worked with AI the longest, designed ethics into its process from the start, and has decades of experience with extremely large, secure projects like this. I think IBM has the highest probability of ensuring more good results and fewer bad ones from this effort.

Tech Product of the Week

Eight Sleep Water Cooled Mattress Cover

I've been a user of the Chilipad since the beginning. It has truly improved my sleep over the years, but it went through distilled water like crazy, and distilled water isn't always easy to find.

So, when my Chilipad Pros started dumping water on the floor, I picked up an Eight Sleep system that has some critical advantages. First, for a large bed, there is only one tall unit to manage and one thick set of hoses that go to the head of the bed. This allows you to place the Eight Sleep system by the headboard rather than at the foot of the bed, which is more convenient for me.

It comes with built-in sleep monitoring that requires a subscription (this was optional on the Chilipad). While the Chilipad’s improved mattress topper was far more comfortable than the old one, the Eight Sleep mattress topper is even better. It looks better, too, though, given that the sheets cover it, that doesn’t mean that much. Still, better is better.

Image Credit: Eight Sleep

The sleep monitor is AI-based, and so far (I’ve had mine for several months now), it has worked incredibly well after its learning period, which is when it figures out the best temperatures for you. The bed is generally the perfect temperature at all times of the night.

Finally — and this was huge for me — it doesn’t use much water. In the months I’ve had this, I’ve used something like an eighth of a cup of water and have yet to need to refill it (my guess is I’ll have to do this twice a year), which is a huge improvement over the Chilipad, which went through nearly a gallon of water a week, sometimes more.

Fortunately, we have tile floors, so I don’t have floor damage, but if I had carpet, I’d have likely had to replace it and check to make sure I didn’t have mold or structural wood damage from the water. This alone would cause me to select the Eight Sleep system over the Chilipad.

Also, they have an option that the Chilipad does not have and that I haven’t yet bought: a pad that goes under the mattress and elevates the bed, which is great for stopping snoring or watching TV.

So, because the Eight Sleep system is better than the Chilipad and because it’s helped me with my sleep issues (getting old sucks), it is my Product of the Week.