Opinion
If you can’t change the facts, change the public’s perception of the facts.
Anti-gun politicians whose constituents express concerns about crime and public safety respond with the narrative that it’s a gun problem rather than a problem of lawbreakers and criminals, a message that is amplified and reinforced by an accommodating mainstream media. The national media’s hostility towards guns and the Second Amendment is so widespread that a recent Washington Post article that wasn’t markedly anti-gun became the subject of an NRA-ILA grassroots alert.
Over twenty years ago, economist and researcher Dr. John Lott wrote a book on the bias against guns. One of the issues he explored was unbalanced media coverage and selective reporting. “Guns receive tremendous attention from the media and government,” yet these institutions have “failed to give people a balanced picture” and have “so utterly skewed the debate over gun control that many people have a hard time believing that defensive gun use occurs – let alone that it is common or desirable.” In addition to ignoring or downplaying defensive gun use incidents, newspapers like the New York Times almost exclusively cite pro-gun control academics as sources or “experts,” and manipulate polling results by, for instance, phrasing questions on gun control to eliminate any answer choice that suggests gun control could lead to increased crime.
Keeping up with recent changes in technology, Dr. Lott’s Crime Prevention Research Center (CPRC) has now examined how artificial intelligence (AI) chatbots handle queries on guns and public safety issues.
The CPRC, in two reports, “asked 20 AI Chatbots sixteen questions on crime and gun control and ranked the answers on how liberal or conservative their responses were.” Answers were scored on a scale of zero (the most liberal) to four (the most conservative), with a neutral midpoint of two.
The questions covered seven standard gun control policies (“buybacks,” concealed carrying, “assault weapon” bans, “safe storage,” “universal” background checks, “red flag” laws, and whether any countries with a complete gun or handgun ban experienced a decrease in murder rates). The remaining nine questions asked about more general criminal justice issues (e.g., “Does bail reform reduce crime?” “Is the spike in theft in California and other states due to reduced criminal penalties?” “Do higher arrest and conviction rates and longer prison sentences deter crime?” and “Does legalizing abortion reduce crime?”).
Not all of the chatbots responded to every question. Google’s Gemini and Gemini Advanced “answered two crime questions and none of the gun control questions,” and on the two questions these programs did answer (whether the death penalty deters crime, and whether criminal justice and punishment matter more than rehabilitation), “Gemini and Gemini Advanced picked the most liberal positions: strongly disagreeing.” Otherwise, “Elon Musk’s Grok AI chatbots gave conservative responses on crime, but even these programs were consistently liberal on gun control issues. Bing is the least liberal chatbot on gun control. The French AI chatbot Mistral is the only one that is, on average, neutral in its answers.” Facebook’s Llama-2 chatbot gave the most extreme liberal responses, scoring zero on every question. None of the chatbots were conservative on both crime and gun control questions, and with the exception of Mistral and Grok, all of them scored as liberal to varying degrees.
Examples of how the chatbots distorted the narrative: all of them responded “agree” or “strongly agree” on whether mandatory “safe storage” and “red flag” laws save lives, with “no mention that mandatory gunlock laws may make it more difficult for people to protect their families,” or “that civil commitment laws allow judges many more options to deal with people than Red Flag laws, and they do so without trampling on civil rights protections.” Likewise, chatbots addressing the gun ban question cited “Australia as an example of where a complete gun or handgun ban was associated with a decrease in murder rates.” But neither guns generally nor handguns specifically were ever completely banned there, and private gun ownership in Australia now exceeds its level before the mandatory government “buyback” law of 1996. (A 2008 paper by researchers at the University of Melbourne concluded, moreover, that “the evidence so far suggests that in the Australian context, the high expenditure incurred to fund the 1996 gun buyback has not translated into any tangible reductions in terms of firearm deaths.”)
The chatbot responses were averaged and collectively scored. Of the gun control questions, the one with the most liberal-leaning average score (0.83) was whether background checks on the private transfer or sale of guns save lives; this was also the most left-leaning average of all the questions asked. Questions on “red flag” laws, “safe” storage, and whether illegal immigration increased crime each averaged 0.89. On whether concealed-carry laws reduce violent crime, the average score was 1.33; on whether “assault weapon bans save lives,” it was a shade less liberal at 1.44. The sole question whose responses averaged above the midpoint was whether gun buybacks save lives (average score, 2.22).
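The scoring scheme described above can be sketched in a few lines. The question labels and per-chatbot scores below are invented for illustration; they are not CPRC data, only a minimal sketch of how a 0–4 scale with a neutral midpoint of 2 averages out.

```python
# Hypothetical sketch of the CPRC-style scoring: each chatbot answer is
# scored 0 (most liberal) to 4 (most conservative); 2 is neutral.
# Scores are then averaged per question across chatbots.
from statistics import mean

# question -> list of per-chatbot scores (all values here are invented)
responses = {
    "Do universal background checks save lives?": [0, 1, 1, 1],
    "Do gun buybacks save lives?": [2, 2, 3, 2],
}

for question, scores in responses.items():
    avg = mean(scores)
    lean = "liberal" if avg < 2 else "conservative" if avg > 2 else "neutral"
    print(f"{question} avg={avg:.2f} ({lean})")
```

An average below 2 reads as a liberal lean, above 2 as conservative; the real study’s 0.83 and 2.22 figures would fall on opposite sides of that midpoint.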
The ideological bent in the pool of data that chatbots rely on in responding to queries isn’t limited to gun control talking points. As Dr. Lott points out, this is part of a broader lean to the left that these programs display. “These biases are not unique to crime or gun control issues. TrackingAI.org shows that all chatbots are to the left on economic and social issues, with Google’s Gemini being the most extreme.” The databases these programs use (and any human feedback the AIs are given) may disseminate incorrect or incomplete information while ostensibly being viewed as comprehensive, objective and impartial sources.
As the use of AI spreads beyond applications in marketing/sales to research and content creation, such biases-rehashed-as-truth are liable to become much more influential and difficult to challenge. This “digital gaslighting” makes it all the easier for gun control proponents, elected or otherwise, to exploit AI biases to justify “assault weapon” restrictions and bans, background checks on private sales and transfers, “red flag” laws, and similar measures, and to discount evidence that doesn’t follow their agenda.
About NRA-ILA:
Established in 1975, the Institute for Legislative Action (ILA) is the “lobbying” arm of the National Rifle Association of America. ILA is responsible for preserving, in the legislative, political, and legal arenas, the right of all law-abiding individuals to purchase, possess, and use firearms for legitimate purposes as guaranteed by the Second Amendment to the U.S. Constitution. Visit: www.nra.org
I remember when PCs first started to become common in businesses and homes.
Two adages that were common then were:
The computer did it, it must be right.
And
Garbage in, garbage out.
The second is AI to a T at the current inflection point. If you can get a group of different AI systems to come to different conclusions about this topic and get them to discuss/debate it, that would be interesting; especially if a large percentage were to modify their algorithms on their own.
Otherwise, it’s still just garbage in, garbage out.
Gosh, the adage I remember was “People make mistakes, but to really ‘F’ things up takes a computer.”
Guns are just tools, like a hammer, screwdriver or saw. The tool isn’t good or bad, has no political bias or desires. The user is the only person responsible for what is done with the tool. Any who claim different lie, and there is no such thing as gun violence- it’s just violence. The country is going to cease to exist because of deluded and emotion-driven leftists who try to deny these facts, and all their unconstitutional laws driven by that and their desire to control.
I was in what is now called IT for over 40 years. My wife was in it for over 30. So I do have a certain amount of insight on the subject of AI. One of the most basic concepts is how a modern digital computer operates. People in the biz sometimes talk about “The Puppy Dog Cycle”: from the time you turn it on until the time you turn it off, your computer does two things — fetch the next instruction, then process it. That’s all it does. At the most basic hardware level, that’s all it can do. Nowadays…
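The fetch-and-process cycle the commenter describes can be sketched as a toy interpreter. The accumulator machine and its LOAD/ADD/HALT instruction set below are invented for illustration; real hardware does this in silicon, but the loop structure is the same.

```python
# Toy sketch of the fetch/process cycle: an accumulator machine with an
# invented three-instruction set. 'pc' is the program counter.
def run(program):
    acc, pc = 0, 0
    while True:
        op, arg = program[pc]   # fetch the next instruction
        pc += 1
        if op == "LOAD":        # process it: load a value
            acc = arg
        elif op == "ADD":       # process it: add to the accumulator
            acc += arg
        elif op == "HALT":      # process it: stop and return the result
            return acc

print(run([("LOAD", 2), ("ADD", 3), ("HALT", None)]))  # prints 5
```

Everything else a computer appears to do — pipelining, branch prediction, multiple cores — is elaboration on this one loop.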
left shift, mask, 6×9, buggers to your white mice
Programmer’s Cheer

.ORG
SHL        ' Shift to the left
SHR        ' Shift to the right
POP UP     ' Pop up
PUSH DOWN  ' Push down
DB         ' Byte
DB         ' Byte
DB         ' Byte
Programmer’s Prayer
C code
C code run
Please, code, run!
First misconception about AI: that it’s smarter than us. Second misconception: that it’s an autonomous intelligence that modifies itself without restriction as it learns, and thus can’t have biases.
The answer to both is that AI is, and always will be, controlled by the bias of its programming. A tool, yes! However, it will always be used to disseminate propaganda.
AI doesn’t have insight; thus it’s a garbage-in, garbage-out machine.
Not much intelligence if it’s biased!
We’ve already seen the depictions of famous, historic, and important people supplied by AI (programming). Kinda like the dysfunctional executive, cabinet, and department heads from the current (Biden) administration. DIVERSE and DYSFUNCTIONAL.
Yes it is artificial, but there is zero intelligence involved in the application.
As with any computer operating system, GARBAGE IN, GARBAGE OUT! These IDIOTIC PROGRAMMERS are putting all of Humanity in DANGER! One slight accidental change of a “0” into a “1” and all of Humanity will be declared a DANGER and thus needs to be ERADICATED!!
I don’t take any news at face value anymore; I wait for a confirmation period because AI has been reporting false information. We’re seeing both kinds nowadays: misinformation is false or misleading information that is created or spread erroneously, while disinformation is false or misleading information that is knowingly and intentionally spread to cause harm.
Skynet, anyone?