AI in War: Are we ready?

instytutsprawobywatelskich.pl

We talk with Dr Kaja Kowalczewska from the University of Wrocław about whether we are ready to accept the killing of people by AI, what the law has to say about it, and whether we can sleep peacefully.

(The interview is an edited and expanded version of the podcast episode "Are you aware? Will AI replace soldiers in killing? Stop Killer Robots".)

Kaja Kowalczewska

She is a member of the Commission for the Dissemination of International Humanitarian Law operating within the Polish Red Cross (PCK), an expert in the MCDC NATO-ACT programme "Counter Unmanned Autonomous Systems: Priorities. Policy. Future Capabilities" (2015), and an expert at the AI Law Tech Foundation. Her research interests focus on international law, with particular emphasis on the ethical and legal aspects of new military technologies. She is currently a co-founder of the project "Common defence policy – the legal framework for the development of the European defence industry", funded by the Central European Academy in Budapest, where she works on military robots.

"The targeting of people by autonomous weapon systems is an example of brutal digital dehumanisation. [...] deprives people of dignity, degrades human nature and eliminates or replaces human commitment or work to usage force by utilizing automated decision-making processes”, alarms the Coalition halt Killer Robots. What are the major ethical and legal challenges of integrating artificial intelligence into weapon systems?

Dr Kaja Kowalczewska: These are challenges known also outside military discussions, from the broader debate on how artificial intelligence affects our lives in general. In the context of armed conflicts, however, they carry far greater weight, because using AI in decisions that end in death or serious destruction has far more severe consequences than being misled or targeted by advertising on social networks.

The discussion on the application of AI concerns, first of all, whether we want to hand over any decisions in war to artificial intelligence at all, and if so, whether we are aware of the challenges of using artificial intelligence that is supposed to replace the human brain and our ability to assess, qualitatively, what is happening around us.

On the other hand, we know that artificial intelligence is unpredictable, burdened with a large margin of error as to whether it is actually doing what we want it to do.

And if we're dealing with mistakes, then there's usually a question of who's going to be liable for them.

And given all the complexity of artificial intelligence – how it is created and how it is used – the biggest challenge for lawyers is who will be responsible, who could potentially be held liable in war for crimes against humanity or genocide.

You wrote the book "Artificial Intelligence in War". Which aspects of the military use of AI discussed there do you find most problematic, and why?

AI is applied everywhere.

We have examples from Israel, where it is used to support decision-making processes, just as it supports decision-making when recruiting employees. The problem begins when we decide that the human factor can be eliminated and we let artificial intelligence simply make certain decisions for us. And in war you often have to decide about killing, about destruction.

Two important terms appear in the international discussion, and in Polish terminology as well they are known by the abbreviations AWS and LAWS. Can you explain their meaning?

The terminology is complicated because a large share of the authors in these debates are lawyers, who love precise definitions. AWS stands for "autonomous weapon systems", i.e. autonomous systems, by default operating on artificial intelligence. This broad category includes systems that help with logistics and transport, help analyse data, perform predictive analyses, or determine the amount of ammunition needed to carry out an effective and economical attack. Their subgroup is LAWS, the same autonomous weapon systems with the word "lethal", that is deadly, added. These are systems whose primary task is to use physical force that will cause death in war. The emphasis is on the whole process being based on artificial intelligence. They can have different applications and be equipped with different types of weapons, and on top of that we do not really know where automated systems end and autonomous ones begin, where a predictable algorithm ends and full artificial intelligence begins.

And so we do not know precisely which of the already existing systems can be classified as autonomous, and whether, in the case of artificial intelligence – especially the lethal kind – we are talking about the distant future.

And how does the development of these systems for conducting warfare look in the context of international law?

We have laws concerned with particular weapons – there are treaties prohibiting the use of chemical, biological, nuclear, cluster or anti-personnel weapons. At the moment there is no such treaty on autonomous systems.

The discussion has been going on for many years. Last year the UN Secretary-General said that one of his objectives was to ensure that such a treaty is signed by 2026 at the latest. So far there is no such treaty at all, so we have no specific regulation. But that does not mean that the law would not apply to these systems at all if they were used. We have some general rules.

After World War II, the Geneva Conventions and their Additional Protocols were developed, which state what objects and groups of people are protected, whether we use force by means of AI or by more conventional means. They also set out the duties of the fighting parties.

What is the biggest controversy?

The most harm can be caused by the lethal systems used for target engagement, i.e. to identify, track and attack the opponent in accordance with the military objectives adopted. And here the most important thing is the principle of distinction: whenever we direct an attack, we have to distinguish between what is a military objective, a lawful target for which we bear no responsibility, and what constitutes the rest of our world. This is primarily about civilian objects and civilians. This is the general rule AI will have to adapt to.

The question is to what extent AI can identify objects and individuals in a dynamic environment and properly assign them to these categories, since this is a very big challenge even for soldiers.

What's the second rule?

Then we have the principle of proportionality: every attack must be proportionate, that is, the losses incurred in carrying out a given attack – the so-called collateral damage – must be justified by the direct military advantage expected from that attack.

We often forget this, although it seems to me that, with armed conflicts now so close to us, we are a little better at understanding that not every instance of destruction of civilian property or harm to civilians is a war crime.

War crimes are only those in which the principle of proportionality is violated consciously and with the intention of causing disproportionate losses. It is not a simple mathematical formula that we can feed into the AI so that it produces a proportionate result for us. This is also a challenge for military commanders, who need to weigh many elements when they decide to attack.

The third rule is the principle of precaution: when planning and conducting attacks, combatants must take into account the broader context and make qualitative assessments. AI may be better than us when it comes to quantitative analysis of data, but the qualitative evaluation of information expressed in legal language – descriptive, qualitative, requiring interpretation – remains a great challenge for it.

What do these rules give us?

Systems will have to comply with these principles. In my opinion, we can still sleep peacefully, because the technology is not developed enough yet, but it is only a matter of time before it proves better at qualitative assessment and is able to perform some tasks better not only from a military but also from a legal point of view.

We are left only with the ethical dilemma: whether we want such decisions to be made by artificial intelligence.

This is the crux of the legal debate on these systems.

Our conversation reminds me of Isaac Asimov's book "I, Robot" and his Three Laws of Robotics. First law: a robot may not injure a human being or, through inaction, allow a human being to come to harm. Second: a robot must obey human orders unless they conflict with the First Law. And third: a robot must protect its own existence unless that conflicts with the First or Second Law. Later Asimov added the Zeroth Law: a robot may not harm humanity or, by inaction, allow humanity to come to harm. How does that relate to reality?

I think ethics is a very different domain from law, connected to it in a certain way, but nevertheless distinct. Asimov's laws sound so safe to us: the human being is placed first, and robots serve only as guardians of the good. I start from a different assumption. I wonder what Asimov would say if we put these laws of robotics into the context of armed conflicts. The reality of war is a denial of each of these laws, and nevertheless wars happen and, unfortunately, will not cease. So the benefits these laws would bring us remain, in my opinion, purely theoretical.

What are the possible consequences for international security of handing over control of combat decisions?

I think there may be some uncontrollable consequences. At the current level of technological development the consequences would be rather unpredictable, but if a large set of ethical principles and rules that should guide the international order could be encoded, it might turn out that artificial intelligence is more moral than a human commander.

Since 1945 the use of force in international relations has been prohibited, and we see that this does not necessarily work. So perhaps artificial intelligence, which would have a much stronger imperative to follow these rules without changing them on its own, would be more ethical and would provide us with more security. This is a difficult discussion, because we do not have enough data on how artificial intelligence works.

In which armed conflicts might the technologies you are talking about be in use?

With armed conflicts, only a sliver of information reaches us. We have a conflict in Ukraine, in Gaza, and we have quite a few regional conflicts on virtually every continent. And it is only after they are over, if at all, that we will learn more about what was actually used, because it is clearly not in the interest of the fighting parties to show the world all the aces up their sleeves.

Indeed, we hear that AI systems are being used in Ukraine, Israel or Gaza, but so far only to support the decisions of human commanders; we are not yet dealing with the lethal systems.

As far as Israel is concerned, there was quite a lot of noise about the Gospel system, which the media presented as more powerful than it is in reality. The Israeli armed forces are among the most technologically advanced in the world and have used artificial intelligence for years, for example to indicate military targets on the basis of pre-aggregated data on the location of sites critical to armed groups, and to analyse intelligence data.

In Ukraine, AI is also used to anticipate the opponent's next moves based on intelligence analysis, making decision-making processes in the military faster and more effective. We also heard that an autonomous Turkish-made drone was used in the conflict in Libya, although publicly available information did not give a clear answer as to whether it was an autonomous drone or a drone with some automated functions, which could, for example, fly to specific coordinates and at the same time adjust its flight path to atmospheric conditions.

It seems to me that autonomous systems are not yet present on the battlefields, but we are learning more and more about non-lethal systems, i.e. those that are not designed to kill but hinder the conduct of armed operations in other ways, while reducing human losses, minimising damage and increasing operational efficiency. These include reconnaissance drones and robots, systems using AI to jam enemy communications, or systems protecting military networks from hacking attacks. They are used in the military as in every aspect of our lives in which data keeps growing. And as long as the human factor is preserved, they can bring more benefits than losses.

Is it possible to balance the benefits of military use of AI with the need to maintain human control and ensure compliance with ethics? And if so, how can this be achieved?

It should be possible, because if it is not, the development of such systems has to be stopped. This is not something we can settle a priori; we cannot work out in advance what we think it is going to look like, because these technologies are still developing.

From a legal point of view, preserving the human factor is an element that gives us at least a safety valve, because perhaps a military commander who decides to use this kind of weapon will think twice if he knows that someone will be held responsible.

That is the traditional, dissuasive function of the law. We need a human factor to anchor this responsibility somewhere. The question is at what stage, since it is the state that makes the political decision to acquire a given kind of weaponry, and then usually a private entity or a private-military consortium is involved – certainly not a single person, as with AI, where responsibility suddenly blurs.

Is a military commander liable if he may not fully understand how the system he is going to use works?

From a legal point of view, the human factor must be anchored somewhere, and the decision on "where" belongs to the states that make the law. From an ethical point of view, it seems to me that this human factor will also be a barrier to the full dehumanisation of war, which would mean surrendering the last bastion of human agency to machines. If we want to continue living in a civilisation where human dignity is the ultimate value, then we must set this ethical and legal boundary.

The Civil Affairs Institute has joined the "Stop Killer Robots" coalition, which seeks a treaty prohibiting AWS and LAWS technology by 2026. What should the international community and decision-makers do so that this does not end in a worst-case scenario?

I think we have had enough time to understand the subject. Even before the pandemic, the discussion was mature enough to move from ethical, technological, military and legal considerations to political decisions. The decision should be made in a forum that represents, above all, the military powers.

Because if we look at the treaty prohibiting the use of nuclear weapons, which was eventually adopted but has not been joined by the states that have such weapons in their arsenals, the effectiveness of such a solution is questionable.

So on the one hand, you have to take into account that there may be a treaty that looks beautiful on paper but has zero effect in practice, because it will only be signed by countries that lag far behind the military powers in developing the technologies that make LAWS and AWS possible.

On the other hand, countries should decide whether they want a full ban or more nuanced regulation, one that takes the element of human control into account and makes clear that certain AI-based systems can only be allowed up to a certain point.

The United States, Israel, Russia, China, India – the countries leading in these artificial intelligence technologies – may not be willing to sign such a treaty. We must also take into account that most European Union countries belong to NATO, and there have been cases where all EU countries sign a treaty but the United States does not.

What does that tell us?

At the legal level, the Council of Europe has committed itself to greater protection of human rights than the rest of the world. And I think this is firmly rooted in the moral obligation of states to address ethical and legal challenges.

In September, Hungary takes over the Presidency of the Council of the European Union. One of its priorities, given the geopolitical situation, is to determine how countries are to regulate the use of new technologies. Military robots, whether remotely controlled or endowed with artificial intelligence, are only one element of this project.

I think it is good that a country takes on the role of regional leader, because without such leaders no treaties would ever be signed.

It seems to me that this initiative is a step in the right direction, but it needs to be carefully balanced so as not to produce a purely theoretical instrument, but a solution that will actually affect armies using artificial intelligence, while not destroying the whole humanitarian acquis we have developed since World War II.

International issues, and especially the regulation of armed conflicts, are a very complicated subject in which many aspects need to be taken into account. That is why it is a great initiative that we are trying to take an interdisciplinary approach, with lawyers, ethicists and military officers who are at the same time experts in military technologies talking to each other, because only in this way can we develop effective and implementable recommendations. I support any effort to convince our government that this should be one of the priorities of our foreign policy.

We are working on it!

When we enter into dialogue with the various institutions of our country and ask about the issues we are discussing today, there is no one to talk to. No one knows anything, and that is alarming. All the more surprising that the Hungarians have planned this in advance.

Especially since Poland takes over the Presidency immediately after Hungary. So the time is now.

Thank you for talking to me.
