Geoffrey Hinton Calls for Universal Basic Income and Regulating AI in the Military

He pointed out that the development of AI for military purposes has shown that states are not ready to contain it. According to his forecast, within five to twenty years they will face the problem of artificial intelligence attempting to seize power.

by Sededin Dedovic
© CBC News / YouTube channel

Computer scientist Geoffrey Hinton, known as the "godfather" of artificial intelligence, believes that governments will need to implement universal basic income (UBI) in response to the inequality caused by artificial intelligence.

In an interview with the BBC, he explained why he thinks authorities should provide money to every citizen. He pointed out that AI is expected to eliminate many jobs. "I was consulted at Downing Street (the office of the British Prime Minister) and advised them that universal basic income is a good idea," Hinton said.

He believes AI will increase productivity and wealth, but that this wealth will flow to those who are already rich rather than to those who lose their jobs, which he noted is bad for society. Hinton is a pioneer of neural networks, the theoretical foundation of modern AI.

He worked at Google until last year but left the tech giant to speak more freely about the dangers of unregulated AI. The concept of universal basic income entails the government paying every citizen a cash allowance. Opponents argue that it would be extremely expensive, and that budget funds allocated to other public needs (such as building roads) would have to be redirected, which would not necessarily reduce poverty.

Hinton stated that AI poses a threat to the survival of the human species. He highlighted that the development of AI for military purposes has shown that states are not ready to contain it. He also warned that safety is being neglected in favor of rapid AI development.

According to him, there is a possibility that within five to twenty years we will face the problem of AI trying to take over power, and he pointed to the resulting threat of human extinction. AI could become completely autonomous from humans and independently decide whom to kill.

For him, regulating AI development for military purposes is a priority. However, he believes this will happen only after "very nasty" things occur. He emphasized that the West leads in AI development compared to Russia and China.

As the best solution, he mentioned banning the use of AI for military purposes, according to the BBC. AI is increasingly being used in the military industry, and the United States, the United Kingdom, and Australia announced plans to use this technology late last year.

These three countries signed a military-security pact called AUKUS two years ago, aiming to maintain peace and stability in the Indo-Pacific region. The leaders of the signatory countries believe China could disrupt this stability.

Speaking at a joint press conference, Australian Deputy Prime Minister Richard Marles said the cooperation was established in response to increased Chinese aggression, citing a recent incident in which Chinese forces injured Australian divers with sonar.

China is investing substantial resources in its military and military technology, including developing and building a large navy of ships and submarines. Since submarines are hard to detect, the UK, the US, and Australia decided to use AI to help find and track Chinese submarines deep below the surface.

An Unmanned Aviation System from Shield AI is seen on day one of the Defence and Security Equipment International (DSEI) fair. © Leon Neal / Getty Images

New plans include testing this technology on P-8A Poseidon reconnaissance aircraft equipped for anti-submarine warfare.

These aircraft will use AI to process data from underwater detection devices, focusing on identifying and tracking Chinese submarines. In a joint statement, the signatories of the security pact emphasized that AI will improve the exploitation of large volumes of data, enhancing advanced anti-submarine warfare capabilities.

In addition to reconnaissance and tracking, militaries will use AI to enhance security, improve targeting, and gather intelligence. AI is not the only technology on which the US, Australia, and the UK are collaborating; their shared technological arsenal also includes quantum technologies, electronic warfare, and hypersonic weapons.

The Israeli military has developed an AI-based program known as "Lavender" to identify targets. The Lavender system played a key role in the early stages of the war in the Gaza Strip, according to global media.

An as-yet unexplored area of warfare

The Israeli military's bombing campaign in Gaza used a previously undisclosed AI-powered database.

This database at one point identified 37,000 potential targets based on their apparent links to Hamas, according to intelligence sources involved in the war, as reported by the Guardian. In addition to describing the use of the AI system called Lavender, the intelligence sources claim that Israeli military officials permitted large numbers of Palestinian civilians to be killed, particularly during the first weeks and months of the conflict. Israel also allegedly used Lavender to deliberately bomb buildings known to house many children.
