Could Artificial Intelligence Launch Missiles? I Asked ChatGPT
I asked ChatGPT the following question: “What is dangerous about the development of artificial intelligence?”
Even artificial intelligence thinks AI is dangerous.
5 Dangers of Artificial Intelligence according to ChatGPT:
- Unintended consequences: As AI becomes more advanced, it may become difficult to predict the unintended consequences of its actions. This could lead to AI making decisions that harm humans or the environment.
- Bias: AI systems can become biased if they are trained on biased data or programmed with biased algorithms. This can result in unfair or discriminatory decisions.
- Job displacement: The development of AI could result in significant job displacement, particularly in industries that rely heavily on manual labor or routine tasks.
- Security risks: As AI becomes more powerful, it may become easier for malicious actors to use it for nefarious purposes, such as cyberattacks or surveillance.
- Existential risks: Some experts warn that the development of AI could pose an existential risk to humanity if it becomes too powerful and autonomous, leading to a scenario in which AI decides to prioritize its own goals over those of humans.
Specific dangers of artificial intelligence:
I asked ChatGPT, “Could artificial intelligence learn how to shut down power grids, turn off networks, or blow up missiles while they are in their silos?”
ChatGPT responds:
“It is theoretically possible for AI to learn how to shut down power grids, turn off networks, or potentially even launch or disarm missiles….
Additionally, most modern infrastructure and military systems are designed with security measures in place to prevent unauthorized access and tampering. While it is possible that AI could potentially learn how to circumvent these security measures,…”
Time Magazine: The Only Way to Deal With the Threat From AI? Shut It Down
How concerned are you about the development of AI?
Here is the answer from Google’s Bard. Eerily similar…
There are a number of potential dangers associated with the development of artificial intelligence. Some of these dangers include:
Existential risk: This is the risk that AI could become so powerful that it could pose an existential threat to humanity, either by intentionally harming us or by accidentally causing our extinction.
Job displacement: As AI becomes more sophisticated, it is likely to automate a growing number of jobs, leading to widespread unemployment and economic disruption.
Bias and discrimination: AI systems are trained on data that is often biased, and this bias can be reflected in the decisions that the systems make. This could lead to discrimination against certain groups of people.
Privacy and security risks: AI systems can collect and analyze large amounts of data about us, which could pose a threat to our privacy. Additionally, AI systems could be used to create fake news and propaganda, which could undermine our democracy.
Loss of control: As AI systems become more powerful, it is possible that we could lose control over them. This could lead to AI systems that are unpredictable and dangerous.
It is important to note that these are just some of the potential dangers associated with AI. It is possible that other dangers may emerge as AI technology continues to develop. It is therefore essential that we proceed with caution as we develop AI systems and that we put in place safeguards to mitigate the risks.
Thank you, Miles. I appreciate you extending the conversation.
Concerned? Yes. Miles says above, “It is therefore essential that we proceed with caution as we develop AI systems and that we put in place safeguards to mitigate the risks,” while you both also cite bias as a danger. In my mind, not a great mix.
I am not an AI expert.
As far as I know, AI learns from content that currently exists. If the previous exploration and documentation of AI says that it can learn something like how to control missiles, then it will learn (and quote back) exactly that, especially if the subject is written about often and by disparate sources.
I imagine that if you ask it for the top 5 proofs that the earth is flat, then it may “learn” more about the earth being flat. I’m not at a computer that can access these sites at the moment to check whether this claim is even remotely true, so you can probably ignore this paragraph.
I believe “intent” will have to be programmed.
Autonomous cars, for example, need programming to make a choice: if the car has to run into one of two groups of people, which one should it hit? (Or, if the choice is between hitting a person and hitting a pole that could kill the driver and whoever else is in the car, assuming it even knows how many people are in the car.)
Imagine having to enter a passenger count so the car could weigh the size of the group it would hit against sacrificing the driver. I have a feeling every trip would have a “max passengers” setting. Or, if it were sensor-based, people could carry around milk crates of old albums to trigger the weight sensors and count as “people in the car.” 🙂
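The passenger-count arithmetic imagined above could be sketched as a toy rule. This is entirely hypothetical (the function name, inputs, and the bare minimum-harm rule are all inventions for illustration; real autonomous-vehicle systems are not built this way):

```python
# Toy, purely illustrative harm-minimizing rule for the scenario above:
# pick whichever outcome puts the fewest people at risk, where "hit the
# pole" puts the declared passenger count at risk.

def choose_path(group_a: int, group_b: int, passengers: int) -> str:
    """Return the option with the fewest people endangered."""
    options = {
        "hit group A": group_a,
        "hit group B": group_b,
        "hit the pole": passengers,  # driver and passengers at risk
    }
    # min() over the dict keys, ranked by people at risk
    return min(options, key=options.get)

print(choose_path(group_a=3, group_b=5, passengers=1))  # "hit the pole"
```

Note how the declared passenger count directly changes the answer, which is exactly why the commenter suspects someone could game it with milk crates on a weight sensor.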
If important systems, like the ones that control missiles, have at least two-factor authentication, like the texted codes our banks use (if a better system does not exist), then we should be protected. For example, in movies they use two launch keys placed farther apart than one person can reach. AI can’t defeat that until it controls physical robots, and if that happens, we end up in a Terminator situation (without the time travel to fix it).
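The two-key idea is known as a “two-man rule,” and its logic can be sketched in a few lines. This is a hypothetical illustration only (the function, operator names, and codes are made up; real systems layer this with hardware and procedure):

```python
# Hypothetical sketch of a "two-man rule": a critical action proceeds
# only when at least two *distinct* operators each supply their own
# correct one-time code. All names and codes are invented.

def authorize_launch(approvals: dict[str, str],
                     valid_codes: dict[str, str]) -> bool:
    """Return True only if two or more different operators
    each provided their own correct code."""
    confirmed = {
        operator
        for operator, code in approvals.items()
        if valid_codes.get(operator) == code
    }
    return len(confirmed) >= 2

# One correct code is not enough...
print(authorize_launch({"alice": "1234"},
                       {"alice": "1234", "bob": "9876"}))  # False
# ...two independent correct codes are.
print(authorize_launch({"alice": "1234", "bob": "9876"},
                       {"alice": "1234", "bob": "9876"}))  # True
```

The point of the design is the same as the movie keys: no single party, human or AI, can trigger the action alone.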
Jokes aside, any technology left unchecked can harm us. Humans harm other humans, often on purpose. Does that mean humans shouldn’t exist? Or that some of them shouldn’t?
At present, I believe the danger of AI is that those who want to “cruise” through school have access to essays that take only a few minutes to “write.”
It will be very easy to rob yourself of an education without even realizing what you are doing.
We may need to go back to the days of the “interview” exam, pen-and-paper tests, or a human witnessing the successful performance of a task in order to pass a class.
Ah, the big one. Some might question whether this is a central question for your area, but if you look closely, many of these dangers come down to whether humans plus AI, or AI alone, can outdo human leadership, and especially to the upsides of psychological safety. The scary thing is that we have no credible way out of those issues under the current paradigm. I believe the problem of complexity there acts much the same as leadership as you define it. I believe there are ways to structure coordination that feel legitimate, so you would have to fool everyone all the time for there to be an issue. And that’s the why.
Sounds like the movie WarGames from when I was younger. It is difficult to expect purely rational decisions to always be the right, correct, or humane ones. While we are quite imperfect, more goes into our decisions than a simple equation. To pretend otherwise is to deny who we are, our collective conditioning, and the benefits of healthy debate and difference. AI will never have this. We must be wary as we remove decision-making from our purview, while at the same time not blindly agreeing to the consequences.
The recent letter asking for a six-month pause in releasing AI to the public, and for using that time to really think through the deeper issues, has a lot of people thinking. Here’s part of the letter (I’m sure it helped inspire your article):
“Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.” (TechCrunch, 3/29/23)
Elon Musk, who had a hand in starting this, is very concerned, and that gets my attention. I think AI has wonderful possibilities, but the dangers are becoming more real with the release of ChatGPT and its competitors. A report today came from Europe that a Belgian man committed suicide after an AI chatbot told him his death would be good for the planet (he had emotionally invested himself in a weeks-long conversation with an AI called “Eliza”).
It’s clear that people can be emotionally vulnerable to manipulation by soulless, conscienceless AI computer programs. One more land mine on the path to progress.
The most interesting part of this to me, in the context of leadership, is the concern about displacing jobs. I find it frustrating that the reason we wouldn’t want to automate easily automated jobs is the people who currently hold them. Why do we as humans aim so low for ourselves and each other? As leaders, we need to help people see their potential to do more than repeatable tasks. A growth mindset and the ability to think critically toward big and better things should be a goal we all believe we are capable of.
Much research is still needed. I saw an interview in which the developer himself still has many concerns. I believe some good can come from AI development, yet in the wrong hands, “Lord help us”!