The Center for Education and Research in Information Assurance and Security (CERIAS)


AI and ML Sturm und Drang

I recently wrote up some thoughts on the current hype around ML and AI. I sent it to the Risks Digest. Peter Neumann (the moderator) published a much-abbreviated version. This is the complete set of comments.


There is a massive miasma of hype and misinformation around topics related to AI, ML, and chat programs and how they might be used…or misused. I remember previous hype cycles around 5th-generation systems, robotics, and automatic language translation (as examples). Each time, the enthusiasm produced some real advances, though none as profound as predicted, and it faded as limitations became apparent and newer, shinier technologies appeared to chase.

The current hype seems even more frantic for several reasons, not least of which is that there are many more potential market opportunities for recent developments. Perhaps the biggest driver of both enthusiasm and concern is the set of entities that see new AI systems as a way to reduce expenses by cutting headcount and replacing people with AI (see, for example, this article). That was a driver of the robotics craze some years back, too. The current cycle has already had an impact on some creative media, including being a point of contention in the media writers' strike in the US. It is also raising serious questions in academia, politics, and the military.

There is also the usual hype-cycle FOMO (fear of missing out): the urge to be among the early adopters, alongside speculation about the most severe forms of misuse. That has led to all sorts of predictions, of outlandish capabilities on one hand and dire doom scenarios on the other, and neither extreme is likely to be wholly accurate. AI, generally, is still a developing field and will produce some real benefits over time, and the limitations of today's systems may or may not be present in future systems. However, there are enough caveats about the systems we have now, and those that may be available soon, to justify genuine concern.

First, LLMs such as ChatGPT, Bard, et al. are not really "intelligent." They perform a form of statistical inference over a massive ingest of data. That is why LLMs "hallucinate": they produce output that matches their statistical model, possibly with some limited policy shaping, rather than applying any form of "reasoning" as we usually define it (a toy sketch below illustrates the distinction). As noted in a footnote in my recent book,
Philosophically, we are not fond of the terms 'artificial intelligence' and 'machine learning,' either. Scholars do not have a good definition of intelligence and do not understand consciousness and learning. The terms have caught on as a shorthand for 'Developing algorithms and systems enhanced by repeated exposure to inputs to operate in a manner suggesting directed selection.' We fully admit that some systems seem brighter than, say, certain current members of Congress, but we would not label either as intelligent.
I recommend reading this and this for some other views on this topic. (And, of course, buy and read at least one copy of Cybermyths and Misconceptions. *grin*)
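
To make the "statistical inference" point concrete, here is a deliberately trivial sketch (in Python, with invented training text and function names) of a word-level bigram model that simply samples the next word from frequencies observed in its input. This is nothing like the scale or architecture of a real LLM, but the core operation is the same kind of statistical next-token prediction, and it shows why fluent-looking output need have no connection to truth:

    # Toy illustration only: a bigram "language model" that picks the next word
    # purely from observed co-occurrence counts in its training text.
    import random
    from collections import defaultdict

    training_text = (
        "the cat sat on the mat . the dog sat on the rug . "
        "the cat chased the dog . the dog chased the cat ."
    )

    # Count how often each word follows each other word.
    counts = defaultdict(lambda: defaultdict(int))
    words = training_text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1

    def generate(start="the", length=12):
        """Sample a plausible-looking sequence from the bigram statistics."""
        out = [start]
        for _ in range(length):
            followers = counts[out[-1]]
            if not followers:
                break
            choices, weights = zip(*followers.items())
            out.append(random.choices(choices, weights=weights)[0])
        return " ".join(out)

    # Output is statistically plausible but never checked against any facts;
    # e.g., "the cat chased the rug" can appear because the statistics allow it.
    print(generate())

A production LLM replaces the bigram table with a neural network trained on vastly more data and adds alignment and policy layers, but it is still predicting likely continuations, not verifying truth.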

Depending on the data used to build their models, LLMs and other ML systems may contain biases and produce outright falsehoods. There are many examples of this issue, which is not new: bias in chatbots (e.g., Microsoft Tay turning racist), bias in court sentencing recommendation systems, and bias in facial recognition systems such as those discussed in the movie Coded Bias. More recently, there have been reports showing racial, religious, and gender biases in versions of ChatGPT (for example, this story). "Hallucinations" of non-existent facts in chatbot output are well known. Beyond biases and errors in chats, one can also find all sorts of horror stories about autonomous vehicles, including several resulting in deaths and serious injuries, because their models aren't comprehensive enough for the conditions in which they are used.

These limitations stem from how the systems are trained. However, it is also possible to "poison" these systems on purpose by feeding them bad information or triggering the recall of biased information; this is an area of burgeoning study, especially within the security community. Given that the encodings learned by these large ML models cannot easily be reverse-engineered to understand precisely what causes certain decisions to be made (the problem addressed by "explainable AI" research), there are significant concerns about inserting these systems into critical paths.
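
To illustrate the poisoning idea at toy scale (the data, labels, and function names below are all invented for this sketch, and real attacks on large models are far subtler), consider a trivial word-count spam filter. It has no way to know that some of its training labels are malicious; it simply learns whatever statistics it is given:

    # Toy illustration only: "poisoning" a trivial word-count spam filter by
    # injecting mislabeled training examples.
    from collections import Counter

    def train(examples):
        """examples: list of (text, label) pairs. Returns per-label word counts."""
        model = {"spam": Counter(), "ham": Counter()}
        for text, label in examples:
            model[label].update(text.lower().split())
        return model

    def classify(model, text):
        """Pick the label whose training vocabulary best matches the message."""
        words = text.lower().split()
        scores = {label: sum(counts[w] for w in words) for label, counts in model.items()}
        return max(scores, key=scores.get)

    clean_data = [
        ("win a free prize now", "spam"),
        ("cheap pills free offer", "spam"),
        ("meeting agenda for project review", "ham"),
        ("please review the quarterly report", "ham"),
    ]

    # An attacker who can contribute training data adds mislabeled examples
    # that associate ordinary business vocabulary with "spam".
    poison = [("quarterly report review meeting", "spam")] * 5

    message = "please send the quarterly report"
    print(classify(train(clean_data), message))           # -> "ham"
    print(classify(train(clean_data + poison), message))  # -> "spam"

The same principle applies, far less visibly, to models with billions of parameters trained on data scraped from sources an attacker can influence, and the lack of explainability makes such manipulation hard to detect after the fact.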

Second, these systems are not accountable under current practice and law. If a machine learning system (I'll use that term, but cf. my second paragraph) comes up with an action that results in harm, we do not have a clear path of accountability and responsibility. For instance, who should be held at fault if an autonomous vehicle were to run down a child? It is not an "accident" in the sense that it could not be anticipated. Do we assign responsibility to the owner of the vehicle? The programmers? The testers? The stockholders of the vendor? We cannot say that "no one" is responsible, because that leaves us without recourse to force a fix of the underlying problems, to provide recompense to the victims, or to raise general awareness among the public. Suppose we use such systems in safety- or correctness-critical systems (and I would put voting, healthcare, law enforcement, and finance as exemplars). In that case, it will be tempting for parties to say, "The computer did it," rather than assign actual accountability. That is obviously unacceptable, and we should not allow it to occur: the price of progress should not be to absolve everyone of poor decisions (or bad faith). So whom do we blame?

Third, the inability of much of the general public to understand the limitations of current systems means that any use may introduce a bias into how people make their own decisions and choices. This could be random, or it could be manipulated; either way, it is dangerous. It could be anything from gentle marketing via recency effects and priming all the way to Newspeak and propaganda. The further towards propaganda we go, the worse the outcome may be. Who draws the line, and where is it drawn?

One argument is, "If you trained humans on rampant misinformation, they would be completely biased as well, so how is this different?" Well, yes -- we see that regularly, which is why we have problems with QAnon, vaccine deniers, and sovereign citizens (among other problem groups). They are social hazards that endanger all of us. We should seek ways to reduce misinformation rather than increase it. The propaganda that is out there now is only likely to get worse when chatbots and LLMs are put to work producing biased and false information. This has already been seen (e.g., this story about deepfakes), and there is considerable concern about the harm it can bring. Democracy works best when voters have access to accurate information, and the rising use of these new generative AI systems is already raising the specter of more propaganda, including deep-fake videos.

Another problem with some generative systems (artwork, novels, programming) is that they are trained on material that may carry restrictions, such as copyright. This raises important questions about ownership, creativity, and our whole notion of the rule of law, and the problems of correctness and accountability remain as well. There is some merit to the claim that systems trained on (for example) art by human artists may be copying some of that art in an unauthorized manner. That may seem silly to some technologists, but we've seen successful lawsuits against music composers alleged to have heard a copyrighted tune at some point in the past. The point is that the law (and, perhaps more importantly, what is fair) is not yet settled in this realm.

And what of leakage? We're already seeing cases where some LLM systems ingest the questions and materials people give them to generate output. This has resulted in sensitive and trade-secret materials being taken into these databases…and possibly becoming discoverable by others with the right prompting (e.g., this incident at Samsung). What of classified material? Law-enforcement-sensitive material? Material protected by health privacy laws? What happens with models used internationally, where the laws are not uniform? Imagine the first "right to be forgotten" lawsuits against data in LLMs. There are many questions yet to be decided, and it would be folly to assume that computing technologists have thoroughly explored these issues and designed around them.

As I wrote at the beginning, there are potential good uses for some of these systems, and what they are now is different from what they will be in, for example, a decade. However, the underlying problem is a mindset I have been calling "Trek futurism": its adherents see all technology being used wisely to lead us to a future roughly like that of Star Trek. Humanity, though, is full of venal, greedy, and sociopathic individuals who are more likely to use technology to lead us to a "Blade Runner" future ... or worse. And that is without considering the errors, misunderstandings, and limitations surrounding the technology (well known to RISKS readers). If we continue to focus on what the technology might enable instead of the reality of how it will be (mis)used, we are in for some tough times. One recent example of this general lack of technical foresight is cryptocurrency. It was touted as leading to a more democratic and decentralized economy, yet some of its highest-volume uses to date are money laundering, illicit marketplaces (narcotics, weapons, human trafficking, etc.), ransomware payments, and financial fraud, along with considerable damage to the environment. Whatever valid uses of cryptocurrency there might be (if there are any) seem heavily outweighed by the antisocial ones.

We should not dismiss warnings about new technologies out of hand, nor label those advocating caution as "Luddites." Indeed, there are risks in not developing new technologies. However, the more significant risk may be assuming that only the well-intentioned will use them.

Comments

Posted by John DiMarco
on Tuesday, June 6, 2023 at 09:02 AM

I agree. In my view, AI is an amplifier, but what it amplifies depends on who is using it, and for what. As you suggest, it is naively optimistic to assume that people will use it solely for good; some will use it for not-so-good things. While most people are well-intentioned most of the time, AI, as an amplifier, permits the few to have an atypically large effect, and those few may not be among the well-intentioned majority. What we need around AI is perhaps not so much the virtue of courage but other social virtues that are a bit less in fashion, such as prudence and temperance. As a society, we will take some time to figure this out. Hopefully, we can manage it.

Posted by David Tilley
on Wednesday, June 7, 2023 at 03:57 PM

The main problem, as I see it, is the same as with social media and cable news: the lack of critical thinking by the general public.

As you point out, the public, and indeed the ancient lawmakers, are incapable of understanding the technology, at least in the near term.

When Orrin Hatch asked Zuckerberg, "If you don't charge your users, how do you make money?", I was as stunned as Zuck. He sat there in total surprise and then said, "Advertising?"

Having our lawmakers try to set guidelines seems like asking my 95-year-old mother-in-law to write regulations for 5G.

Posted by Bob Hathaway
on Friday, June 9, 2023 at 01:11 AM

Great article; I have written much the same. To play devil's advocate: considering today's fads, media foolishness and corruption, politicians, statism, propaganda, social engineering, education as indoctrination, corporatist elites, and all the rest, do generative "deep learning" neural networks really present any new danger beyond what we are already fed? Perhaps as long as there are good people such as yourself to spread the truth, we'll stay on track. Soon will come multi-paradigm, general AI, and then the singularity with artificial superintelligence. My research aims to explicitly counter the "evil" in the world and to work toward a benevolent, philanthropic, post-scarcity technological world; perhaps that is our only hope, since nothing else has worked or holds such promise.

Posted by Christoph Schaefers
on Saturday, June 24, 2023 at 11:15 PM

Great article!
@David Tilley: I share your skepticism about the know-how of some politicians, but couldn't scientists and technology experts advise them? In a democracy, politicians are the ones charged with balancing the different interests of science, industry, consumers, and so on.
In my opinion, it is good news that at least some parliaments are taking up the task (even though this legislation will have to be improved over time):
https://www.dw.com/en/eu-lawmakers-lay-groundwork-for-historic-ai-regulation/a-65909881

——-
Note from Spaf: Some nonprofit, nonpartisan groups do seek to advise policymakers about the technology. I chaired one of them for over a decade: what is now the US Technology Policy Committee (USTPC) of ACM. The USTPC has recently issued several documents about the ethical development of AI systems.
