Tuesday, February 26, 2019

Another AI Exaggeration: It Can Say Why There Is Religious Conflict

Dear reader, though you won't believe it, there is such a thing as the Journal of Artificial Societies and Social Simulation. When you search its name, Google shows you this:

The Journal of Artificial Societies and Social Simulation is a quarterly peer-reviewed academic journal created by Nigel Gilbert. The current editor is Flaminio Squazzoni. The journal publishes articles in computational sociology, social simulation, complexity science, and artificial societies

I don't know, but I would not be surprised to learn the comically named Flaminio Squazzoni is (him-)itself the result of an AI prank.

Anyway, the journal has such papers as "An Agent-Based Model of Rural Households' Adaptation to Climate Change", "Innovation and Employment: An Agent-Based Approach", "Methodological Investigations in Agent-Based Modelling", and "Agent-Based Agent-Basing: An Agent-Based Approach." I'm kidding about the last one.

A real one is "A Generative Model of the Mutual Escalation of Anxiety Between Religious Groups" by the beautifully named F. LeRon Shults and others. If F. LeRon Shults doesn't double as a famous Baptist preacher, he's missed a huge opportunity.

What about the paper?

We propose a generative agent-based model of the emergence and escalation of xenophobic anxiety in which individuals from two different religious groups encounter various hazards within an artificial society.

That's some serious agent-basing.

Now academics are constantly inventing puzzles for themselves to solve. They have to have something to write about, and, in truth, it really is publish or perish. It doesn't really matter if the puzzles have nothing to do with anything; it only matters if enough academics can be gathered together to call their puzzle a subject.

So we can't find fault in painting what are glorified video games with a scholarly patina. We recognize artificial societies are not real societies, and we understand what happens in a video game is only interesting within that video game. We know that an artificial society being run on a computer and given the name Artificial Intelligence does not mean it has any bearing or relation to non-artificial societies. We know this because whatever comes out of an algorithm is only what is put into it.

If, therefore, an algorithm is designed to show "video game religions will have conflict if these conditions hold", then it will show that video game religions will have conflict if those conditions hold. Whether this idea applies to real religions is not a question that can be answered inside the algorithm.
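
To make the point concrete, here is a toy sketch of the kind of rule such a model encodes. Everything in it (the agent attributes, the anxiety increment, the threshold) is my own invention for illustration, not the authors' code; it shows only that the "conflict" in the output is exactly the conflict written into the rules.

```python
import random

# Toy agent-based sketch (illustrative only, not the paper's model).
# "Conflict" occurs precisely when the rule we wrote says it occurs.

CONFLICT_THRESHOLD = 0.7  # invented parameter

class Agent:
    def __init__(self, group):
        self.group = group  # religious group label
        self.anxiety = 0.0  # a number we *call* anxiety

    def encounter(self, other):
        # Rule we chose: meeting an out-group member raises "anxiety".
        if other.group != self.group:
            self.anxiety = min(1.0, self.anxiety + random.uniform(0.0, 0.2))

def step(agents):
    a, b = random.sample(agents, 2)
    a.encounter(b)
    b.encounter(a)
    # "Conflict" is declared whenever our threshold rule fires.
    return any(ag.anxiety > CONFLICT_THRESHOLD for ag in agents)

agents = [Agent("A") for _ in range(10)] + [Agent("B") for _ in range(10)]
for t in range(1000):
    if step(agents):
        print(f"'Conflict' at step {t}, because the rules guarantee it.")
        break
```

Run it and "conflict" duly emerges, which tells you nothing about real religions and everything about the threshold we typed in.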

The folks at Science Daily might not grasp this point. They say "AI systems shed light on root cause of religious conflict: Humanity is not naturally violent."

Artificial intelligence can help us to better understand the causes of religious violence and to potentially control it, according to a new Oxford University collaboration. The study is one of the first to be published that uses psychologically realistic AI — as opposed to machine learning….

The study is built around the question of whether people are naturally violent, or if factors such as religion can cause xenophobic tension and anxiety between different groups, that may or may not lead to violence?

A truly psychologically realistic AI would be able to simulate, in a causal sense, the real intellects, wills, memories, and so on of human beings. Can the algorithm designed by Shults do that? No, sir, it cannot.

What it does do instead is this:

At every time step, the model environment produces hazards that may be of four different types: natural hazards (e.g., earthquake or volcano), predation hazards (e.g., prowling predatory animal), social hazards (e.g., cultural other interpreted as a threat), and/or contagion hazards (e.g., out-group member with apparent contagious disease). The first two of these hazards have to do with nature, broadly speaking, while the latter two hazards are related to other human beings encountered in society.

These are not real hazards, you understand, but weights in a portion of the video game that are labeled hazards. There are no real people reacting to real hazards, even in a simulated sense. There are only equations interacting with inputs from other equations. There is nothing psychological about it.
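
For the curious, this is roughly what "equations interacting with inputs from other equations" looks like. The sketch below follows only the quoted description (four hazard types drawn at each time step); every name and number in it is an assumption of mine, not the paper's code.

```python
import random

# Sketch of a hazard-producing environment, following only the quoted
# description. Hazard types match the paper; probabilities are invented.
HAZARDS = {
    "natural":   0.05,  # e.g., earthquake or volcano
    "predation": 0.05,  # e.g., prowling predatory animal
    "social":    0.10,  # e.g., cultural other interpreted as a threat
    "contagion": 0.10,  # e.g., out-group member with apparent disease
}

def produce_hazards():
    """Each time step, the environment may emit any of the four hazards."""
    return [h for h, p in HAZARDS.items() if random.random() < p]

def react(anxiety, hazards):
    # The "psychology": a number goes up when certain labels appear.
    social_hits = sum(h in ("social", "contagion") for h in hazards)
    return min(1.0, anxiety + 0.1 * social_hits)

anxiety = 0.0
for _ in range(100):
    anxiety = react(anxiety, produce_hazards())
print(f"Final 'anxiety': {anxiety:.2f}")  # an output of equations, nothing more
```

There is no fear in there, only a float named anxiety.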

The SD people don't appear to grasp this and actually believe the authors were able to code "how humans process information against their own personal experiences."

Yet even the authors understand that they still have to prove whether their model works. "Concerns about external validity arise when the results of the model cannot be generalized. The results of our current model cannot be generalized to explain specific occurrences of (or to forecast) mutually escalating xenophobic anxiety."

They still want to believe, though: "However, this does not mean that the model bears no relationship to the real world."

We're going to see more of this sort of thing. People believe that since smart people built a model on a computer, the Artificial Intelligence model must be good, true, useful, and worthy. This blind faith is, of course, a form of scientism.


