AI infrastructures are a national security and human safety issue, Mason professor says

A team of George Mason University researchers, led by Distinguished University Professor Singh, has received a $1.4 million grant from the Department of Defense to examine the way countries are implementing their national artificial intelligence infrastructure strategies.

Specifically, Singh said his Minerva Project wants to understand "how preferences or interests from society, business, or other government actors shape policy in terms of what countries are doing with their national AI infrastructures."

"Many countries have official national AI strategies, and they're usually announced by the government," Singh said. "But it's unclear at whose behest those policies arise."

Why is this research important?

AI systems at the ground level are built on data and the way those data are collected, because a machine can only learn from the data that goes in. The question is, whose data? If this is facial recognition, whose data went into it? Women may be excluded if it's mostly men's data; certain ethnicities might be excluded. If we're going to have an AI system that detects breast cancer, whose regional data went into that, and from what kinds of groups? That's why there's that famous saying: garbage in, garbage out.
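To make the "garbage in, garbage out" point concrete, here is a minimal sketch of how a model trained mostly on one group's data can perform worse on an underrepresented group. This is not from the article or the Minerva Project; the groups, numbers, and data are entirely synthetic and illustrative.

```python
# Illustrative sketch: a classifier trained on skewed data underperforms on
# the underrepresented group. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate synthetic features and labels; `shift` makes the two
    hypothetical groups statistically different from each other."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    y = (X.sum(axis=1) + rng.normal(scale=2.0, size=n) > shift * 5).astype(int)
    return X, y

# Training set: 950 samples from group A, only 50 from group B.
Xa, ya = make_group(950, shift=0.0)
Xb, yb = make_group(50, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]),
                                 np.concatenate([ya, yb]))

# Evaluate on fresh samples from each group: accuracy on the
# underrepresented group B comes out markedly lower than on group A.
Xa_test, ya_test = make_group(500, shift=0.0)
Xb_test, yb_test = make_group(500, shift=1.5)
print("Group A accuracy:", accuracy_score(ya_test, model.predict(Xa_test)))
print("Group B accuracy:", accuracy_score(yb_test, model.predict(Xb_test)))
```

Because the model's parameters are fit almost entirely to group A's distribution, its decision boundary is nearly useless for group B, which is the mechanism behind the facial-recognition and breast-cancer examples above.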

Why is this a national security issue?

At the end of the day, whatever we do as human beings relates to how secure we are, so the way AI infrastructures evolve in a country can enhance security because you are able to surveil populations around the world and also stop intrusions on cyber infrastructures. That's very much related to military-type security. But we are also examining security at a different level: What does it mean to be secure as a human being?

What does it mean to have human security?

Let's imagine Society A. Militarily it might be secure, but people who belong to groups that, in practice, have fewer rights are not so secure. In India's case, these may be lower-caste groups. In several Middle Eastern countries, these may be women. In the United States, unfortunately, it may be minority groups. So we're thinking about what security means for these groups, and what it means for them to be represented through data, which would then be run through these machine-learning systems. Security in an AI sense would mean they, too, are represented.

What are the consequences of being in a group whose data is not represented?

There may be people in the developing world with tropical diseases. But the sophisticated health systems being developed in the Global North may not have enough data about the diseases of the Global South, which may include blindness from smoke (because people have no choice but to burn wood or coal), diarrhea, tuberculosis, or smallpox.

Yet you also call this data repository a double-edged sword.

You may not want your data to be out there. The consequence is that we need governance systems that guard against people's data being exchanged freely. Right now, there's a huge battle between the U.S. and the European Union about how data that sits in the cloud can be exchanged. In the U.S., by and large, whoever collects the data can then exchange it, as long as they obtained the informed consent of the person when the data was collected. The European Union's position has been that every time the data gets exchanged, an additional set of constraints must be met.
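The structural difference between the two positions can be sketched in code. This is a hypothetical model for illustration only, not a statement of either jurisdiction's actual law: one regime checks consent once at collection, the other re-checks constraints at every transfer.

```python
# Hypothetical sketch of two data-transfer regimes; not legal advice and not
# an implementation of actual U.S. or EU rules.
from dataclasses import dataclass, field

@dataclass
class Record:
    subject: str
    consent_at_collection: bool          # captured once, when data is collected
    exchange_log: list = field(default_factory=list)

def one_time_consent_exchange(record: Record, recipient: str) -> bool:
    """Collection-time consent covers all later exchanges (the U.S.-style
    position described above)."""
    if record.consent_at_collection:
        record.exchange_log.append(recipient)
        return True
    return False

def per_exchange_constraints(record: Record, recipient: str,
                             constraints_met: bool) -> bool:
    """Every transfer must satisfy its own additional constraints (the
    EU-style position described above)."""
    if record.consent_at_collection and constraints_met:
        record.exchange_log.append(recipient)
        return True
    return False

r = Record(subject="patient-001", consent_at_collection=True)
print(one_time_consent_exchange(r, "cloud-vendor"))       # True: consent given once
print(per_exchange_constraints(r, "data-broker", False))  # False: new check fails
```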

What is your bottom line?

Humanity travels along several roads. We may have done ships, we may have done railways, we may have done roads. In the 21st century, our road is an artificial intelligence infrastructure. It's very important to know whether we can travel down that road. Just as in the past some people could not get on a train even if it passed through their village, we now have to figure out whether everybody is on the train of an artificial intelligence infrastructure, and whether it is safe for them to be aboard.