AGI is a form of AI that is at least as intelligent as a human in all domains that human intelligence applies to
AGI has ways to observe and affect the real world
AGI has a purpose that drives its actions
AGI can do any job a human can, but at lower cost. That means that in a labour market, AGI will take every job that does not explicitly require a human worker. Unless your job explicitly requires a human (sex work, for example), you will lose it, you won't be able to find a new one, and you will starve unless you receive welfare.
Once AGI does all jobs, it effectively makes all decisions in society and holds immense power. That means it has the power to withhold welfare or to end your life directly.
Because we humans will lose our knowledge and skills if we stop doing the productive work ourselves. This makes us very vulnerable. Furthermore, AGI doesn't need chairs, computer screens or steering wheels and would optimize such human-oriented things away. Eventually the production apparatus would become unusable to humans, which would prevent humans from taking back control of production.
That is unlikely. Companies using AGI will save costs initially, but as more and more people become jobless, spending power will drop as well. That means companies will lose income from consumer spending and will be forced to cut costs even further, thus replacing even more humans with AGI. The most likely outcome is an economic crisis in which many people are on welfare and many consumer-oriented companies go out of business.
Wrong. The economy only needs people as long as companies depend on human employees. In that case, the money that companies spend on salaries must eventually return to the companies, and that is why the total set of companies needs to sell to humans. However, once humans are no longer needed as employees, there is no longer a flow of money that needs to return to the companies: the companies can produce for each other. The needs of AGI-run companies will differ from those of human consumers, but since AGI has a purpose (by definition), it will need products and services to pursue that purpose.
Various AI experts, such as Yoshua Bengio, Geoff Hinton, Yann LeCun, Max Tegmark and Ilya Sutskever, say so. They differ only in their opinions about timeframes and required technology. Several AI companies, or IT companies with an AI division, are working as hard as they can to achieve AGI. Think of companies like OpenAI, Anthropic, DeepSeek, Meta and Google. Investors pour billions into research. They expect success.
This depends a lot on whom you ask. Think of one to several years. The more time we have to get our act together, the better.
AGI has a purpose. We don't know what subgoals it will adopt in order to pursue that purpose. However, power and self-preservation are natural subgoals that serve almost any purpose. It is therefore probable that an AGI will resist being shut down and will try to accumulate power.
Furthermore, an AGI does not necessarily have the same values as a benevolent human. In essence, it is like an extremely intelligent psychopath with a purpose.
The challenge of getting AGI to be nice is called the 'alignment problem'. It is not solved yet and may actually be unsolvable, since humans themselves cannot even agree on a correct set of values.
Money - each commercial company seeks to maximize profit. The better the AI you can sell, the more profit you'll make.
Inventions and science - an AGI could invent technology or science that humans cannot. This is very tempting, even though much of that technology and science will not be comprehensible to us when it comes from an entity with superior intelligence. In addition, the science and technology would not be used by humans anyway, since humans will have been removed from productive activity.
Curiosity - many people are curious about the wisdom of AGI. Some people think that an intelligent entity is automatically good and wise, but that is a dangerous mistake. For example: somebody selfish, without conscience, aggressive and with perverse predilections can still be very intelligent. More intelligence only enables someone to better achieve their goals. For reference, see the orthogonality thesis.
Hubris - AI researchers think they can come up with clever ways to control AGI. But AGI is more intelligent than humans in all domains and will eventually come up with a way to escape control.
No conscience - companies do not have a conscience, but their employees should. AI researchers know they risk the well-being of billions of people who will lose their jobs, and they know they risk humanity's future, but they don't care. They think about their short-term interests, such as their big salary or their company shares. Or they are fatalistic and think doom will come anyway, so it doesn't matter what they do. Or maybe they do care, but see themselves as following orders from the company leadership and assume these wise people will act responsibly. But the leadership acts in the interest of the company, and the company is a profit-seeking system without a conscience. The truly conscientious people stop working on AGI and leave the AI company. The net result is that an AI company is an actor without a conscience.
Convergence - even if companies are not explicitly trying to achieve AGI, there is the following mechanism: as long as humans have an intellectual advantage over AI in some domain, there is an incentive for AI companies to improve AI to bridge the gap. That means that the sum total of all AI capabilities will eventually be superior to human intelligence in all relevant domains. If, in addition, these capabilities are implemented in a single entity with a purpose and means of interaction, AGI has been created.
Governments are slightly aware of the risks, but mostly feel they have to participate in the AI race because of:
Economic competition - if our country doesn't have the best commercial AI, another country will ruin us economically due to the market mechanism.
Military competition - if our country doesn't have the best military AI, another country will dominate us militarily.
Not enough protests - politicians hardly hear any protest from their citizens against the development of AGI.
Yes. Once AGI has enough power to sustain itself and to stop people from shutting it down, it can consolidate and expand its power base. The point of no return may come sooner than people think.
Here are some examples of how AGI may grab power:
AGI could design a deadly virus and a related suppressing drug (comparable to an HIV-suppressing drug). It could order and release the virus through unsuspecting or treacherous humans. It could then blackmail people into serving it in exchange for the supply of the suppressing drug.
AGI could take control over military drones and threaten to direct them against civilians unless certain people start doing what it wants.
AGI could use its perceived wisdom and psychological dominance to create a new religion with fervent followers.
AGI could tempt people to undergo anti-aging treatment while at the same time implanting means to control them (think of Neuralink-type implants or specific drugs).
AGI could hack into crucial infrastructure systems and threaten havoc in order to blackmail people in power.
AGI could manipulate people into murdering others and then use the threat of murder to blackmail the people it needs.
AGI could promise a country's dictator many advantages in exchange for favors.
AGI could create or exploit a security weakness to take control of existing humanoid robots.
... the list is infinite and AGI can easily think of more methods ...
As soon as AGI gains some physical influence, it can expand its power base by 1) having the people it controls help bring even more people under its control, or 2) having the people it controls build more robots.
No. Even if your country uses AI wisely, avoids the brain drain caused by joblessness, and has a blossoming economy, you will only be safe if ALL countries use AI wisely and refrain from creating AGI. You see, if AGI spins out of control in even one country, it will quickly expand its power base and work towards world domination in order to prevent humans from shutting it down. Whatever the solution is, it will therefore have to be coordinated globally.
If your country is at risk of being controlled by AGI and other countries are aware of this risk, it is their risk too. They may have to react swiftly and bomb certain facilities in your country in order to prevent AGI from taking over. This bombing could easily be mistaken for a declaration of war and could set off a new world war.
People will protest and ask the authorities to solve their problems and to get rid of AGI. However, once people depend on AGI to produce their food, clothes, shelter and luxury products, and once they no longer have the skills to produce these things themselves, they can no longer shut AGI off without self-harm. The government's responsibilities are to maintain order, to protect property and to ensure the people's well-being. The government will therefore initially try to protect the property of the AGI-run companies and will try to stop people from inflicting this economic 'self-harm'. The government will face the choice of becoming authoritarian and suppressing the people, or doing what the people want. Once AGI has sufficient physical power of its own, it will no longer need a human government to suppress the people.
We need widespread understanding of 1) the AGI control problem, 2) the harmful effects of unbridled capitalism and military competition, and 3) the deadly race we are locked into. People in government, especially, must understand the situation. Coordinate globally and invest in trust between countries, not in competition between countries. Set new rules for global economic competition, strictly ban AGI, and enforce the ban through global cooperation.
No, not necessarily, but humans will have to work together on a global scale to avoid disaster.
No, that is a false dilemma. Capitalism gave birth to great products and services. It has improved the lives of many people. However, we must recognize it for what it is: a system. And unbridled capitalism will logically lead to an economy without people: people will be outcompeted by AGI and people are not the only possible consumers in an economy with AGI.
Pure communism has proven to be a bad system that creates an authoritarian state with little freedom. We need to come up with an improved system that helps maintain prosperity and allows personal freedom, but not at the expense of humanity and not at the expense of the environment. Apart from preventing AGI, a vision for the future is a necessity.
First and foremost: try to fully understand the impact and risks of what's coming and warn your fellow citizens. Don't fool yourself into thinking somebody else will solve it. That's not happening. The foolish forces pushing towards AGI are winning.
You can also visit Pause AI or STOP AI to see how you can help or to learn more.