How deregulation, drones and digital oligarchs derail the 21st century


Article by Günther Burbach: https://apolut.net/maschinen-an-die-macht



Ukraine is not the perpetrator but the stage. The suffering of the population and the existential struggle for self-defence are being instrumentalized to develop and test weapons technology that can soon be used in other conflicts. The US, the United Kingdom, Israel and private arms companies see this war not only as a geopolitical project, but also as an opportunity for accelerated market readiness.


Artificial intelligence, robotics and military autonomy combine into a new form of power


Imagine a scene that no longer takes place in a science-fiction film, but on the actual battlefields of our time: a two-metre humanoid robot marches through a bombed-out industrial district. It detects movement behind a wall, scans the interior with a thermal imaging sensor, calculates a threat probability of 78 percent using a neural module and opens fire. No radio contact. No orders. No human hesitation.
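The unsettling point of this scene is how little "decision" there actually is: a statistical score meets a fixed threshold. As a purely hypothetical sketch, with invented names, weights and numbers that do not reflect any real system, the pipeline could be as trivial as this:

```python
# Illustrative sketch only: how a threshold rule turns a statistical
# score into an irreversible action with no human in the loop.
# All names and numbers here are hypothetical.

def threat_probability(thermal_signature: float, movement_score: float) -> float:
    """Combine hypothetical sensor scores into a single probability."""
    # A trivial weighted average stands in for a neural network's output.
    p = 0.6 * thermal_signature + 0.4 * movement_score
    return max(0.0, min(1.0, p))

def decide(p: float, fire_threshold: float = 0.75) -> str:
    # The entire "decision" collapses into one comparison.
    return "ENGAGE" if p >= fire_threshold else "HOLD"

p = threat_probability(thermal_signature=0.9, movement_score=0.6)
print(round(p, 2), decide(p))
```

With these invented inputs the score lands at 0.78, just above the threshold, and the system "fires". Shift the threshold by a few hundredths and the outcome flips, which is precisely the fragility the article goes on to describe.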

This picture is still fiction. But not for long.

Because the technological, political and economic conditions for such scenes to become reality were created long ago. Artificial intelligence, robotics and military autonomy are merging into a new form of power, one that is no longer legitimized by states or constitutions, but by server farms, satellite constellations and deregulation decrees.

These three developments are like system modes that increasingly elude human influence:

  1. The digital infrastructure of Elon Musk, in particular the Starlink satellite network and Tesla's humanoid robot Optimus, is already the backbone of military communication and automation in conflict zones such as Ukraine.
  2. Autonomous weapons systems, in particular drones with AI target-recognition and decision-making modules, are no longer prototypes but a reality on the battlefield, often unnoticed and often uncontrollable.
  3. Donald Trump's political deregulation of AI, as set out in the executive order of January 2025, could create a regulatory vacuum in which technologies are deployed globally without moral safeguards.

What does that mean? That as a society we face a technological upheaval that affects not only our world of work or social media, but also the basic questions of human existence: Who decides over life and death? Who is responsible when algorithms make mistakes? And what happens when the instruments of power are in the hands of those no longer subject to democratic control?

This article examines these three lines of development not in their theoretical capacity, but in their actual implementation, with reliable sources, technical examples and a clear intention: to warn before it is too late.

While the talk shows are still debating heating laws, the new reality is being programmed in secret, and it does not know the order to retreat.

The future of war is not being prepared; it has long since been rehearsed.

The Digital War Machine – How Musk's Starlink and Robots Are Changing the Front

When people talk about Elon Musk today, many think of Tesla, rocket launches or the odd obscure tweet. What goes largely unnoticed in public, however, is that Musk has long been a player in a global war, not metaphorically, but in reality. And not as a politician or general, but as an infrastructure provider. Thanks to his Starlink satellite network, he has quietly moved into a key position previously reserved for states. Without Starlink, the Ukrainian army would be largely digitally blind, even though officially neither the US nor NATO is directly involved.

Starlink essentially acts like a digital nervous system. There are currently over 5,500 satellites in orbit, and the number is growing; Musk wants to expand it to over 40,000. Each of these satellites is part of a network that delivers internet to ground stations, independent of cables, servers or national telecommunications infrastructure. Starlink has been used militarily in Ukraine since 2022. Initially the aim was to stabilise the civilian sector, but the system has long been used to control drones, enable real-time communication between the front and command, and coordinate attacks. In essence, Starlink is the backbone of Kiev's digital war.

And Musk? He controls the network, both technically and politically.

In February 2023, he blocked certain Kiev military requests for the first time, such as attacks on Russian warships guided by Starlink data. This may sound reasonable in terms of de-escalation, but above all it shows one thing: a single entrepreneur can now decide which army can communicate and which cannot. Imagine a telecoms company deciding, for moral reasons, to block Allied radio links during World War II.

Unthinkable then. Today: reality.

At the same time, Tesla is working on a project that raises even bigger questions: the humanoid robot "Optimus". It is still advertised as an intelligent helper for household and production.

Musk has shown it off on stage: carrying boxes, sorting screws, balancing an egg in the kitchen. A perfect PR catch.

But what this 1.73-metre robot really is: a platform. Like a smartphone onto which you can load any application, only with legs, gripping hands and 200 newton-metres of arm torque. The company promises mass production of Optimus soon: tens of thousands of units a year.

Officially, it is supposed to sort packages or help seniors. But anyone who knows this technology knows: Optimus can also walk, keep its balance, recognize objects and make decisions. All of this is based on neural networks, machine learning and real-time data processed by thousands of cameras and sensors. And if such a system works in a factory, why not in a tunnel in Bakhmut? The technology does not distinguish between housework and urban warfare.

What matters is what you teach the software. A cleaning robot recognizes what is dirty. A combat robot recognizes what is dangerous. And if the decision about what is dangerous is left to a learning system, we are not far from machines that kill on their own.

One could object that this is unlikely, that Optimus is not working yet, that it is still a distant dream. But the point is that in a deregulated environment, and that is Trump's policy (more on this later), it will not take 20 years, but perhaps three. And in those three years, a country like Ukraine is already testing AI-controlled drones in actual operations. This is not an argument against Ukraine. It is an argument against the naivety with which we perceive technological realities.

Optimus does not have to be armed to be dangerous. It is enough that it acts as an autonomous system, that is, without constant human control. Whether a robot pulls a soldier out of a fire or hands him over to an enemy can be a matter of life and death.

And no one is responsible if anything goes wrong. No officer, no programmer, no server.

The combination of systems, however, is even more serious: Starlink provides the digital infrastructure, Optimus the physical platform. Added to this are AI models such as GPT-5 or special military large language models (LLMs) that train decision-making schemes on thousands of scenarios and real-world war data. The volume of data Musk hopes to collect with Optimus could in turn be used to train further AI models. And these models can then, and this is not speculation but the current state of development, identify targets, calculate risk analyses and issue operational proposals. In real time. Autonomously.

You do not need to be paranoid to recognize what is happening here: a digital-military machine that already partially functions and whose full automation was set in motion long ago. Not by parliaments or generals, but by technology giants and investors.

The problem is not that this technology exists. The problem is that it is out of control. When a satellite network like Starlink affects the course of a war, when a humanoid robot can be reprogrammed at any time, when algorithms independently set target priorities, that is not progress. It is a turning point.

Invisible weapons – autonomous systems and artificial intelligence at war: the test in Ukraine

What was once shown only at military exhibitions or calculated in Pentagon papers is now being tested in real time, on Ukrainian soil, often entirely hidden from public view. Not only tanks, rockets and Western weapons technology flow constantly into the war zone, but also a new generation of automated systems: drones, robots and air-defence modules, some equipped with learning algorithms, some entirely autonomous.

"Autonomous" in this context does not simply mean "without remote control". It means that the system itself decides whether something is recognized as a target, whether there is a threat, and whether or not to shoot.

That such technologies are in use has been confirmed not only by journalists and specialist analysts, but now also by Western defence partners. Ukraine has long since become a testing ground for AI-assisted weapons technology. What works there is later exported, whether militarily, economically or politically.

A particularly well-known case is the so-called "Saker Scout", a drone developed in collaboration with, among others, British engineers and now equipped with an AI module that can detect ground targets using so-called deep learning. The system analyses infrared data and aerial images, compares them with training data from previous missions and categorizes targets as enemy positions, vehicles or civilian infrastructure. The hit rate is frighteningly high and the error tolerance alarmingly low.

Once an object is classified as a potential target, the drone can mark it for subsequent destruction by artillery, or attack it directly if equipped with explosives. Human consultation? Not necessarily. Some systems operate in so-called "fire and forget" mode: they receive an approximate search area, and the algorithm does the rest.

The first autonomous systems are also being deployed on the ground. The British Ministry of Defence confirmed the delivery of "SkyKnight" systems to Ukraine in 2024. These are mobile air-defence modules with AI-assisted target detection. The operating principle: a 360° radar analyses the trajectory of incoming objects, the software determines whether it is an enemy drone, a bird or one of its own helicopters, and it fires within a fraction of a second. The whole system is coupled to a decision-making neural network trained on simulated combat scenarios.
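The operating principle described here, track analysis followed by near-instant classification, can be caricatured in a few lines. This is an invented, rule-based stand-in; the real system's features, classes and thresholds are not public, and a deployed system would use a trained network rather than hand-written rules:

```python
# A minimal sketch, NOT the real SkyKnight logic, of how a radar track
# might be auto-classified before a fire decision. Classes, feature
# values and thresholds are invented for illustration.

from dataclasses import dataclass

@dataclass
class Track:
    speed_mps: float      # ground speed estimated from successive radar returns
    altitude_m: float
    rcs_m2: float         # radar cross-section estimate

def classify(t: Track) -> str:
    # Hand-written rules stand in for a trained neural network.
    if t.rcs_m2 < 0.01 and t.speed_mps < 30:
        return "bird"
    if t.speed_mps < 70 and t.altitude_m < 1500 and t.rcs_m2 < 0.5:
        return "drone"            # small, slow, low: treated as hostile
    return "aircraft"             # anything larger: needs human review

def fire_decision(t: Track, friendly: bool) -> bool:
    # The fraction-of-a-second "decision" the article describes.
    return classify(t) == "drone" and not friendly

track = Track(speed_mps=45.0, altitude_m=300.0, rcs_m2=0.05)
print(classify(track), fire_decision(track, friendly=False))
```

Even this toy version makes the core problem visible: everything hinges on where the classification boundaries sit, and a bird, a civilian drone and a loitering munition can all live uncomfortably close to the same boundary.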

Similar technologies have already been tested in Israel with the famous "Harop" drone, a so-called "loitering munition", which can circle over the target area for up to six hours before striking and destroying it. Meanwhile, Western and Eastern European arms companies are working on similar concepts. Control is becoming increasingly automated. It is no longer people who decide when to launch an attack, but a set of training data.

Another disturbing example: thermite drones filled with incendiary metal that deliberately set fire to enemy positions. These drones, some equipped with autonomous targeting algorithms, can paralyse entire trenches or supply lines. The fire spreads widely and is hard to extinguish. It is neither precise nor surgical, but psychologically brutal.

These developments receive only limited scientific scrutiny. While organisations such as the Campaign to Stop Killer Robots and UNIDIR (the UN Institute for Disarmament Research) have been warning against autonomous warfare for years, governments, especially under the pressure of current conflicts, operate in a grey area. For understandable reasons, Ukraine discloses hardly any information about its systems. Producers invoke trade secrets. And the Western sponsors? They hardly talk about it, or they call it "innovation".

Technically, much of this is based on so-called edge processing: the computation takes place directly in the device rather than on a central server. This makes systems faster and more responsive, but also less controllable. An autopilot in a car is easy to track. The AI tracking algorithm on the chip of an FPV drone that explodes in a trench 300 km behind the front line is not.
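The auditability gap between server-side and edge processing can be illustrated with a deliberately minimal contrast. Both functions and the "model" here are hypothetical stand-ins; the only point is that the on-device path leaves no external trace unless logging is explicitly designed in:

```python
# Toy contrast: why edge inference is harder to audit than cloud inference.
# The "model" is a trivial threshold; function names are invented.

def cloud_inference(frame: float, log: list) -> bool:
    # Server-side: every request crosses a network boundary that can be
    # monitored, rate-limited, logged or shut off.
    log.append(("request", frame))
    verdict = frame > 0.5            # stand-in for a model
    log.append(("verdict", verdict))
    return verdict

def edge_inference(frame: float) -> bool:
    # On-chip: the same computation, but no trace leaves the device.
    return frame > 0.5

audit_log: list = []
cloud_inference(0.7, audit_log)
edge_inference(0.7)
print(len(audit_log))   # the edge path contributed nothing to the log
```

The same asymmetry is why a Starlink uplink can be switched off by its operator, while an algorithm burned into a drone's chip cannot be recalled once the drone is launched.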

In May 2025, the Center for a New American Security (CNAS) warned against this development in a study.

Its title: "The Kill Chain Breaks Free".

The study examines how AI systems increasingly form their own decision chains without human supervision. The authors speak of an "erosion of military responsibility" and "fundamental challenges to international law". Once programmed, the machine's moral compass is limited to what it was taught.

What happens if the data is wrong? What if the system confuses civilians with combatants? What if the algorithm classifies a heat signature as a threat because a similar pattern previously led to a hit?

In such cases, responsibility becomes a black box. No one knows exactly how the algorithm reached its decision; it can only be reconstructed retrospectively, if at all.

Ukraine is not the perpetrator but the stage. The suffering of the population and the existential struggle for self-defence are being instrumentalized to develop and test weapons technology that can soon be used in other conflicts. The US, the United Kingdom, Israel and private arms companies see this war not only as a geopolitical project, but also as an opportunity for accelerated market readiness.

What is being tested here is nothing less than the transition from a controlled to a self-determining war machine. And the consequences are not yet foreseeable.

Trump's Deregulation – AI Without Seat Belts

When it comes to the danger posed by artificial intelligence today, many think of Terminator fantasies or intelligent machines that could one day take over the world. Hardly anyone realises, however, that the legal basis for preventing the uncontrolled use of such systems is currently being dismantled at speed, especially in the US. And one of the most important drivers of this development is Donald J. Trump.

On 23 January 2025, just three weeks after taking office, Trump signed a package of measures without precedent in US regulatory history: the so-called "10-to-1" executive order. Its essence: ten existing regulations are to be repealed for every new government regulation. This sounds like administrative simplification, but in fact it is a general attack on every form of government supervision, especially in the technology sector.

The attached White House fact sheet states:

"America will be the world leader in artificial intelligence. Regulation must not be a brake on freedom and innovation." (Source: whitehouse.gov, 23 January 2025)

What sounds like a promise of innovation means in practice: AI systems can in future be developed, distributed or even transferred into military contexts with far less regulatory control, without ethics committees, without risk analysis and without transparent standards.

Among the casualties is the Blueprint for an AI Bill of Rights, a set of Biden-administration principles intended to guarantee human-rights protection in sensitive AI applications. This document, which was never legally binding, was immediately branded "a brake on growth" by Trump's advisers. The Defense Innovation Unit guidelines, which were to provide critical oversight of autonomous defence systems, are also to be scrapped.

What does that mean concretely? That artificial intelligence for military purposes, for example for target selection, pattern recognition or decision support, can in future be developed without any obligation to ensure traceability. And this does not happen in a vacuum.

Large corporations such as Palantir, Anduril, Lockheed Martin and OpenAI (military division) have long been ready to integrate so-called "predictive targeting modules" into drone systems or command-and-control platforms.

Technically, this means that language models such as GPT-5 (and its military variants) can, on instruction, simulate entire operational scenarios, set priorities among potential targets and recommend tactical decisions based on large data sets, and even intervene directly in decision-making processes.

An example: an AI system analyses the last 72 hours of enemy movements, recognizes patterns in the deployment of heavy artillery, creates a probability map of the next attack and on this basis generates a suggestion as to which coordinates should be targeted with precision weapons. Without human judgement, without contextual knowledge, based solely on mathematical weighting. The decision to attack then lies either in the hands of a tired commander or, in future, in the hands of the system itself.
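Stripped to its core, the suggestion generation described here is frequency counting over a grid. A deliberately simplistic illustration, with invented data and no resemblance to any real targeting software, shows how little "judgement" such a recommendation need contain:

```python
# Hedged sketch of pattern-based target suggestion: count past sightings
# per grid cell over a time window and propose the busiest cell.
# All data is invented; real systems are vastly more complex.

from collections import Counter

# (grid_x, grid_y) cells where heavy artillery was observed in the last 72 h
sightings = [(3, 7), (3, 7), (4, 7), (3, 6), (3, 7), (8, 2)]

def probability_map(obs):
    """Relative frequency per cell: a crude 'probability of next attack'."""
    counts = Counter(obs)
    total = sum(counts.values())
    return {cell: n / total for cell, n in counts.items()}

def suggest_target(obs):
    # Pure frequency counting: no context, no judgement, just arithmetic.
    pmap = probability_map(obs)
    return max(pmap, key=pmap.get)

print(suggest_target(sightings))   # the most frequently observed cell
```

Whether the busiest cell holds an artillery battery or a field hospital that happens to sit beside the road is exactly the contextual knowledge the arithmetic cannot supply.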

Trump himself commented on the subject at a campaign event in March 2025 as follows:

"We don't need left-wing moral codes for machines. We need results. America cannot lose because bureaucrats fear the future."

His objective is clear: technological leadership at all costs. Risks? Ignored. Ethical debates? Dismissed as "left-wing scaremongering".

And that is dangerous, because the combination of deregulation and military interests is an explosive mixture. Companies that previously hesitated to market systems with a higher level of autonomy because of regulatory obstacles now feel emboldened. Moral responsibility is transferred to the customer or built directly into the product.

An internal report of the Center for Humanitarian Technology, revealed in February 2025, warns urgently:

"If the regulatory safety net disappears, a global proliferation of algorithmic violence will follow. Whoever deploys first wins; whoever warns loses market share."

Particularly explosive: these developments coincide with massive investments in AI infrastructure, including the so-called Stargate Project, a $500 billion consortium of OpenAI, Oracle and SoftBank that is building a new global AI infrastructure. The target: 100,000 H100 graphics processors deployed across ten mega-data centers, controlled by their own quasi-internet. Technological power that outpaces every national legislator.

There are still international bodies dealing with these developments, such as UNESCO. But their influence is limited. And as long as a global power such as the US actively deregulates, everyone else will be forced to follow suit or drop out of global competition.

The end result is a development that may have begun with the best intentions but is now getting out of control: machines that think, act and kill independently. Driven by political short-sightedness, economic pressure and a willingness to sacrifice ethical principles for the sake of progress.

Perspectives and warnings – when the system decides and no one can stop it

It does not start with an explosion. It starts with data transmission. With a satellite that relays data. With a drone that decides whether an object should be classified as a threat. With a robot that no longer turns around when someone screams.

What we have seen in recent months, and what we have described in this article, is not technological progress in the classical sense. It is a shift of power. Away from people. Away from responsibility. Towards machines that know more about us with every decision, learn more about us, and ultimately decide what happens to us.

This development has three faces:

The first is Elon Musk, or rather what he represents. Not just a man with satellites and robots, but a technology complex that has freed itself from politics. Anyone who can decide the course of a war today by filtering connections or withdrawing their services has more power than a parliament. And anyone who simultaneously builds robots that walk, grip and act, and then claims they are household helpers, may not be telling the whole truth.

The second face is the autonomous weapon. What once began with remote-controlled missiles is now a self-learning system that determines where to strike based on target images, sample data and computational models. Ukraine shows us how such technology works "in an emergency". And it also shows how quietly this threshold was crossed. No debate. No mandate. No plan B.

The third face is Donald Trump and what his policy is causing worldwide: the fall of all barriers. When rules are repealed because they "slow down innovation", when research on artificial intelligence no longer requires ethical supervision, when it is permitted to build huge models that nobody fully understands, this is not a step forward. It is a loss of control in real time.

What, then, will come of it?

Perhaps not the immediate dramatic collapse of the system. Perhaps not a march of the machines. But the continued devaluation of human decision-making. A slow erosion of responsibility, the rule of law and transparency. And finally a new normality, in which the question is no longer whether the system is allowed to kill, but only how reliably it does so.

The tragic thing is that we saw it all coming. We made films about it, wrote books and launched research projects. But now that things are becoming concrete, now that people in Kiev fly drones via Starlink, humanoid robots enter serial production in Texas and the regulatory emergency brake has been released in Washington, we prefer to watch election polls, football games and inflation figures.

But those who look away now will not be able to say later that they did not know.

The good news: there is still time. Time to talk about artificial intelligence not only in the jargon of Silicon Valley, but politically, legally and ethically. Time to formulate real principles: global, binding and transparent. Time to reclaim responsibility from those who delegate it to algorithms.

But this time is running out. Because machines have no patience. They just keep calculating.

A final appeal:

We are not living in a dystopia. Not yet. But we are heading in that direction if we continue to believe that progress is inherently good, that technology is neutral, that systems are smarter than people. Perhaps it is time to stop being surprised and finally speak up. Because those who remain silent while machines learn to kill are complicit.

Günther Burbach


The article was published on May 30, 2025 at: https://apolut.net/maschinen-an-die-macht/

Sources and footnotes:

I. Trump's Deregulation Policy

Source 1: White House (Office of the US President), published January 23, 2025
https://www.whitehouse.gov/fact-sheets/2025/01/fact-sheet-president-donald-j-trump-launches-massive-10-to-1-deregulation-initiative/

Source 2: White House, published January 23, 2025
https://www.whitehouse.gov/presidential-actions/2025/01/unleaching-prosperity-through-deregulation/

II. Autonomous Weapons Systems in the Ukraine War

Source 3: Forbes (US business magazine), published October 17, 2023
https://www.forbes.com/sites/davidhambling/2023/10/17/Ukrainians-ai-drones-seek-and-attack-Russian-forces-without-human-oversight/

Source 4: Automated Decision Research (research platform on autonomous systems), available since 2024
https://automatedresearch.org/weapon/saker-scout-uav/

Source 5: UNITED24 Media (official Ukrainian media project under President Volodymyr Zelensky), published April 14, 2024
https://united24media.com/war-in-ukraine/ai-powered-turret-that-hunts-Russian-drones-meet-sky-sentinel-ukraines-new-air-defense-8589

Source 6: Wikipedia (used only for technical classification, not as a primary source), accessed May 2025
https://en.wikipedia.org/wiki/IAI_Harop

Source 7: "Investing in Great-Power Competition" (CNAS)
https://www.cnas.org/publications/reports/investing-in-great-power-competition

III. Elon Musk, Starlink and Robotics

Source 8: Financial Times (British business newspaper), published January 21, 2025
https://www.ft.com/content/a9cd130f-f6bf-4750-98cc-19d87394e657

Source 9: TechCrunch (US technology news platform), published January 21, 2025
https://techcrunch.com/2025/01/21/openai-teams-up-with-softbank-and-oracle-on-50b-data-center-project/

Source 10: OpenAI (official company website), published January 21, 2025
https://openai.com/index/announce-the-stargate-project/

Source 11: Times of India (Indian daily newspaper, technology section), published February 28, 2025
https://timesofindia.indiatimes.com/technology/social/elon-musk-provides-an-update-on-teslas-optimus-robot-there-will-be-a/articleshow/119912214.cms

Source 12: The Sun (British tabloid, technology section), published March 16, 2025
https://www.thesun.co.uk/tech/31012254/moment-elon-musk-reveals-tesla-optimus-robots/
