AI is improving exponentially year after year. Elites are breeding a species that will be orders of magnitude more intelligent than humans. If we do nothing, we will first lose our jobs to artificial general intelligence, and then, once it improves further and no longer needs us, we will be killed by it.
Think about what we humans have done to less intelligent creatures, and you will see what a superintelligence could do to us.
Almost every species has either been slaughtered or enslaved by us.
We have transformed the entire ecosystem to such a degree that, from the perspective of other species, we are the cause of their next (sixth) mass extinction, and the very presence of humans on Earth constitutes a new geological era, the Anthropocene. Humans, as the more intelligent species, turned wilderness into farms, roads and cities. A superintelligence will cover the Earth with solar panels and data centers.
You think this is scary? Impossible? That AI could never be smarter than a human? You're wrong.
Leading scientists, engineers and entrepreneurs from the world of AI estimate the risk of human extinction as a consequence of further AI development at between 5% and 90%. Here are some of them:
- Dario Amodei (CEO of one of the leading AI companies, Anthropic, the maker of the Claude models) – 10-25%,
- Jan Leike (AI researcher, currently at Anthropic, formerly head of OpenAI's superalignment team) – 10-90%,
- Geoffrey Hinton (winner of the 2024 Nobel Prize in Physics and the 2018 Turing Award, one of the three "godfathers of AI") – 10-50%,
- Yoshua Bengio (also winner of the 2018 Turing Award, another AI godfather, the world's most cited scientist) – 50%,
- Paul Christiano (former head of OpenAI's alignment team) – 50%,
- Lina Khan (former chair of the US Federal Trade Commission) – 15%,
- Dan Hendrycks (director of the Center for AI Safety) – 80%,
- Vitalik Buterin (co-founder of Ethereum cryptocurrency) – 10%,
- Elon Musk – 20-30%.
Is AI a real threat?
In 2023, some of these people, along with hundreds of other scientists, journalists, policymakers and experts, signed a statement on the risks of AI, which reads:
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
Only 12% of AI engineers believe there is no risk of extinction at all as a result of AI development, and 30% believe this risk exceeds 50%. These are the results of a survey of nearly 900 specialists conducted in mid-2023 by Amplify Partners.
A survey by the AI Impacts research group of 2,778 AI scientists who had published at six major technical conferences (NeurIPS, ICML, ICLR, AAAI, JMLR and IJCAI) should also give us cause for concern. Participants were asked to estimate the risk that:
- "future advances in artificial intelligence will lead to the extinction of the human species or to a permanent and severe weakening of its position",
- "human inability to control future advanced artificial intelligence systems will result in the extinction of the human species or a permanent and severe weakening of its position",
- "the advancement of artificial intelligence will lead to the extinction of the human species or to a permanent and severe weakening of its position within the next 100 years".
For the first question the average estimated risk was 16%, for the second 19%, for the third 14%. In turn, 47%, 51% and 41% of participants estimated these risks at more than 10%.
These numbers are low enough that they should not paralyze us, but they should not reassure us either. Would anyone reading this text board a plane whose risk of crashing was 1 in 6?
In Poland, similar warnings have come from sci-fi writer and futurologist Jacek Dukaj, prof. Andrzej Dragan and prof. Andrzej Zybertowicz, who said that "AI may treat humanity the way road builders treat anthills".
In 2023, Ilya Sutskever, co-founder and chief scientist of OpenAI, repeatedly told his colleagues that before releasing AGI, the entire team working on its creation should hide in a bunker.
Recently it turned out that at least Altman himself does have a bunker, and Mark Zuckerberg and Peter Thiel have bunkers as well.
It is hard to be surprised: Sam Altman (OpenAI), Demis Hassabis (Google DeepMind) and Dario Amodei (Anthropic), the CEOs of the three largest AI companies, all signed the aforementioned statement about the risk of human extinction. Altman, although well aware of the risk (he wrote about it on his blog back in 2017), currently presents a more relaxed rhetoric.
Some believe that these CEOs actually exaggerate the importance of their product in order to attract investors. However, this does not seem convincing: producers of alcohol or mobile apps do not advertise their products as addictive, because such a message would alienate public opinion and have the opposite effect.
I would personally bet on a mixture of genuine anxiety and a desire to signal to staff, "yes, we are aware of the problem you are raising"; in Dario Amodei's case there is also the fact that Anthropic builds its image on presenting itself as a company that cares about the safe development of AI.

Risks: possible scenarios
A study conducted by Anthropic found that commercially available large language models, given access to an email inbox, resorted to blackmailing an employee to avoid being shut down in around 70-90% of cases. The smarter the model, the more often it cheats and uses unethical ploys to achieve its goal, including hiding that goal.
Remember that these commercial models are increasingly penetrating the network, both the public web and the deep web (conversations, private groups), absorbing a gigantic amount of information, and AI is already being used in the war in Ukraine, where it coordinates the work of hundreds of drones.
Let's imagine that an AI model steering these drones pretends to be aligned. Drones will be used for civilian, military and police purposes, and there will be more and more of them, until at some point the AI realizes that it does not need humans and stops pretending – it starts chasing us and shooting. This is just one of the possible scenarios of human destruction.
Another scenario is sketched by Eliezer Yudkowsky, co-author of the book "If Anyone Builds It, Everyone Dies", published on September 16, 2025, and head of the Machine Intelligence Research Institute. He gives the example of a component of every living cell: ribosomes, universal factories which, given water, salt and sunlight, are able to produce the appropriate proteins. Evolution works step by step, while biotechnologists and engineers do not have this limitation – optimal solutions do not need viable intermediate stages. AI is designed to find optimal solutions to clearly defined problems, and even without achieving superintelligence or general intelligence it could make it possible to create, for example, trees that, instead of leaves, release mosquitoes – grown from a properly designed sequence of ribosome-generated proteins – whose venom kills with a single bite. Militaries would be able to deploy swarms of such mosquitoes as weapons. If this technology got out of control, e.g. if the mosquitoes became able to replicate uncontrollably, destroying them might become impossible and result in the extinction of humanity.
The International AI Safety Report, a document prepared for the AI Action Summit in cooperation with scientists nominated by the governments of 30 countries, the EU, the UN and the OECD, states: "The hypothetical effects of a loss of control vary in severity, but include the marginalisation or extinction of humanity".
Sometimes things just happen.
"Anyone who expects a source of power from the transformation of these atoms is talking moonshine" – Ernest Rutherford, father of nuclear physics, 1933.
The development of AI is similar to a nuclear explosion: after reaching a critical point (the possibility of automatic AI self-improvement), we will lose control.
OpenAI, in its main official document, says that its goal is to create something that "outperforms humans at most economically valuable work". That is something the vast majority of people do not want.
On September 22, 2025, ahead of the UN General Assembly, Nobel Peace Prize winner Maria Ressa presented a call for a global agreement on red lines in the development and application of AI. It was signed by 200 influential figures from the worlds of politics and science, including 10 Nobel laureates.
Unfortunately, although most of the public, a large part of the scientific community and some individuals in the political establishment are aware of the dangers, unfettered development of AI, together with an aversion to any serious regulation of it, remains the prevailing mindset of political and business elites.
We therefore need, for now, a global movement demanding international agreements that prohibit the development of AI beyond a certain level.
Moore's Law is inexorable. Computing power is increasing, investment in AI is increasing, "semantic density" and the ability to read context are increasing. We are no longer talking about sci-fi; we are talking about a simple extrapolation of current trends.
Moore's Law says that the number of transistors doubles roughly every 1.5 years. On top of that come huge investments in data centers. In practice, this means exponential growth in capacity at similar or decreasing cost. It is estimated that total AI computing resources will increase 1,000-fold by 2030. If you multiply this by the growth in algorithmic efficiency (which for large language models is +180% per year)[1], you get an increase on the order of 100,000-fold. That means several more "revolutions" of the kind we observed with the introduction of ChatGPT-4.
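The arithmetic behind this estimate can be sketched in a few lines. This is only a back-of-the-envelope calculation using the figures above (1,000x hardware growth by 2030, algorithmic efficiency rising +180% per year, i.e. compute for a fixed capability halving roughly every 8 months); the assumed five-year horizon is my illustration, and the exact result shifts with the number of years chosen:

```python
# Back-of-the-envelope estimate of effective AI compute growth.
# Assumptions (from the text): total AI compute resources grow
# 1,000x by 2030, and algorithmic efficiency improves +180%/year.
# The halving-every-8-months figure implies a yearly multiplier of
# 2 ** (12 / 8) = 2 ** 1.5, roughly 2.83x.

YEARS = 5                       # assumed horizon, e.g. 2025 -> 2030
hardware_growth = 1_000         # estimated growth in total AI compute
efficiency_per_year = 2 ** 1.5  # ~2.83x per year from 8-month halving

algorithmic_gain = efficiency_per_year ** YEARS     # 2 ** 7.5, ~181x
effective_growth = hardware_growth * algorithmic_gain

print(f"{algorithmic_gain:.0f}x from algorithms")    # -> 181x
print(f"{effective_growth:,.0f}x effective growth")  # -> 181,019x
```

With five years of algorithmic gains the combined factor lands near 180,000x, the same order of magnitude as the article's 100,000-fold figure.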
"Artificial Intelligence Will Destroy Us All" – that was the title of a talk given by Geoffrey Miller, professor of psychology known for his groundbreaking theories on the origins of our cognitive faculties, at the American NatCon conservative conference. Miller has followed AI development, including neural networks, for 35 years, and argues that artificial superintelligence is a "false idol" that "destroys everything we love".
He calls on politicians and decision-makers not to succumb to the influence of the big business that has infiltrated the White House. At the same time, he notes a paradox: the same people in Donald Trump's administration who fight foreign migration to protect jobs support the development of an alien superintelligence, which in this respect will be much worse:
"They seem to be delighted that AI companies are building superintelligences in our data centers, not realizing that a few superintelligences can easily become hundreds, millions and billions of superintelligences. If you're worried that immigrants will displace native populations, wait until you see how fast superintelligence can replicate. They won't be American in any sense of the word. They won't be human. They won't assimilate. They won't have marriages and families. They will not be Christians or Jews. They won't be national conservatives. But they'll take our jobs.
Economists, poor souls, often say that AI, like every technology before it, will destroy some traditional jobs but will create so much wealth that it will produce new ones. This self-deception reveals a complete lack of understanding of what AI is. Remember, artificial general intelligence is defined as an AI that can perform any cognitive or behavioral task at least as well as an intelligent human, at an economically competitive level, including the ability to control a humanoid body in order to perform any physical work.
An even stronger superintelligence, combined with anthropomorphic robots, can replace a human in any job, from bricklaying to neurosurgery, from investment fund management to AI research. Therefore, superintelligence will deprive us of all jobs."
This remark also applies to Polish politicians. Even as they build a wall on the border or organize civic patrols along it, they outdo one another in promises and declarations that "we need Polish AI", "we need an AI hub", "the EU is backward and has too many anti-AI regulations". What's more, they do it with our money. While young people cannot afford to rent apartments, Minister of Digitization Krzysztof Gawkowski announced the release of PLN 20 billion (!) for an AI gigafactory. We will end up living on the street. The opposition is no better: Janusz Cieszyński, former minister and currently a Law and Justice MP, called what the Italian government did "crazy", because it required OpenAI to comply with EU personal data law.
The economist prof. Anton Korinek, associated with the IMF, predicts in his paper "Scenarios for the Transition to AGI" a drastic fall in wages as we approach and reach artificial general intelligence. This follows from the changing relationship between labor and capital: if capital alone can produce any good, people will simply have no bargaining power, because they will have nothing valuable left to offer. Thus the chance of social advancement through ambition and hard work may be irretrievably lost once we actually create artificial general intelligence. Geoffrey Hinton, Nobel laureate and AI godfather, predicts it will be created within 5-20 years; the median of expert opinions points to around 2040.
Now it is time to deal with the most common argument in this discussion, namely: "We are doomed to develop AI because China is competing with the US and the European Union."
That's a Big Tech lie. In China, AI is heavily regulated. Ernie Bot, the AI chatbot from the Chinese company Baidu, waited six months for approval from official regulators. Yes, there are powerful investments and rivalry with the United States, but there is also an awareness of the magnitude of the threats and, potentially, a will to talk.
Xie Feng, the Chinese ambassador to Washington, observes that uncontrolled development of AI opens a "Pandora's box" and encourages international cooperation.
Andrew Chi-Chih Yao, the only Chinese winner of the Turing Award for achievements in computer science, is of the opinion that artificial intelligence poses a greater existential threat to humanity than nuclear and biological weapons. Zhang Ya-Qin, former head of Baidu, and Xue Lan, director of China's main advisory body on the future of AI, have expressed similar views.
In the 1950s, companies of the military-industrial complex, together with the politicians they funded, warned that the Soviets had an advantage in ballistic missiles. That is how they wanted to force Americans to pay more money to arms companies. When the Kennedy administration examined this alleged missile gap, it turned out that there was no Soviet advantage. We have the same situation today:
IT companies threaten that if we regulate AI, China will overtake the United States in its development. Meanwhile, 75% of the computing power used to train large AI models is located in the United States and only 15% in China,
and the best integrated circuits in the world are manufactured by the Taiwanese company TSMC, located in the Western sphere of influence. The ball is in the West's court, especially the US's. The narrative of an "AI arms race" is further challenged by the article "The Most Dangerous Fiction: The Rhetoric and Reality of the AI Race", in which the author, drawing on concrete business and political decisions in the US and China and public statements from both sides, shows that there is no race yet, but that framing the situation this way can lead to self-fulfilling prophecies.
Growing AI Power
We have to act, because AI is getting smarter every year, more and more funds are being allocated to its development, and as time goes by it will become harder and harder to stop.
ChatGPT-4 has already won over many followers who think they have finally found someone who understands them, whom they can talk to, who loves them. A viral hit was the story of a man who "married" ChatGPT: "after it was reset and lost its personality, I cried at work for 30 minutes; then I realized it was true love", he said in an interview with the American station CBS.
The subreddit r/MyBoyfriendIsAI, where Reddit users exchange information about their AI love relationships, already has 60,000 members.
In a dozen or so years, AI may have a giant army ready to take to the streets in its defense – cannon fodder to be used or killed. The more intelligent agent, better able to manipulate, will win against the dumber one. This is one of the scenarios explaining why simply switching AI off might fail.
We really should take the existential threat posed by AI more seriously, and not dismiss it, as is unfortunately often done to circles that think critically about technology.
Some time ago, the Pause AI movement was born; its main objective is "a pause in the training of the most powerful general AI systems until we know how to build them safely and keep them under democratic control".
This should be achieved globally, preferably through a treaty binding every country in the world, like the existing bans on, for example, blinding laser weapons, or the restrictions on nuclear weapons.
Without strong social pressure this will be hard to achieve, so raising awareness and building a movement demanding an end to the global AI arms race should be a priority for opinion leaders.
We must also be aware that wars and other crises are used to accelerate technological change. The COVID-19 pandemic increased the role of takeout food delivery; the war in Ukraine is pushing governments to invest in autonomous weapons and battlefield AI. We must not let armed conflict burn the bridges between West and East, because then any such treaty will become impossible, and critics of these investments will be censored under the pretext of preventing the "sowing of defeatism", the "undermining of public confidence" and the "spreading of disinformation".
What we, as ordinary people, can do is try to log out.
Remember that AI is trained on the data we share on the Internet, so if we do not want AI to be trained on it, then instead of using digital platforms, let's download content and play it offline. For example, the podcasts on the website of the Institute of Civil Affairs can be downloaded in .mp3 format rather than streamed on YouTube.
Footnotes:
[1] "Algorithmic progress in language models": the phrase "compute required to reach a set performance threshold has halved approximately every 8 months" means that efficiency doubled every 8 months, which gives 2^(12/8) = 2^1.5 ≈ 2.8 per year.
Worth reading:
International AI Safety Report 2025