David Wojick

David Wojick is an independent policy analyst and senior advisor to CFACT. As a civil engineer with a PhD in logic and analytic philosophy of science, he brings a unique perspective to complex policy issues. He specializes in science- and technology-intensive issues, especially in energy and environment. As a cognitive scientist he also does basic research on the structure and dynamics of complex issues and reasoning. This research informs his policy analyses.

cfact.org


Having worked on the I side of AI, off and on, for over 50 years, I am fascinated by the wave of chatbots generating so much debate, beginning with ChatGPT. To get my fair share of abuse, here are some thoughts on them.

Just like Wikipedia, these machines are there to answer your questions. But their corpus is probably enormously larger; how big, I have no idea. An interesting question is what body of documentation a given chatbot is working from, and how this differs from bot to bot.

Question-answering systems have been around for a while now. They really hit their stride when IBM's Watson creamed two human champions in a 2011 Jeopardy! match.


What is impressive is that the new bots provide long-form answers along the lines of a Wikipedia article. But then you can also question the article, asking for more. You can even disagree and debate the issue. This is truly amazing.

What seems to surprise or disappoint a lot of people is that these bot answers can be wrong, biased, or even outright lies. I am sure none of this is deliberate on the developers' part, but when it comes to emulating humans, which is what AI is supposed to do, it is right on the money.

After all, there is a lot of bad information out there for the bot to use. And being biased by your training is nothing new. The lying, at this point, looks like a mystery, and a very interesting one. Certainly a good research topic.

In fact, there is a lot of research going on into these bots and their potential (for good or evil). Google Scholar says there are already over 15,000 journal articles that mention ChatGPT, about 600 of which have it in the title, making it the central focus of the research.

See https://scholar.google.com/.

All of this means that one must be cautious in using a chatbot, just as with Wikipedia or any source, for that matter. Bias and falsehood are constant companions of human affairs.

In particular, ChatGPT is heavily biased in the climate and energy area, being basically an artificial alarmist. The other bots likely are too. But so is Wikipedia, and for the same reason: all are controlled by alarmists.

But unlike most alarmists, these chatbots are happy (can I say that?) to discuss and debate the climate and energy issues. They will even admit to being wrong, which might be a rare human trait. So one use is for skeptics to test their arguments, learning to overcome the false alarmist counter-arguments. This could strengthen skepticism.

Unfortunately, while a bot can be convinced to agree with a skeptical argument, it does not seem to learn from that experience. It gives the same alarmist response the next time it is queried. Maybe the next wave will do better.

I have no idea how the bots actually work, but from my side, it is not hard to go from simple question answering to long-form answers. Back in 1973, I discovered how sentences fit together when we write and speak.

At the simplest, each sentence after the first is answering an unspoken question posed to one of the prior sentences. Since there can be multiple answers to a given question and multiple questions posed to the same sentence, this generates a tree structure I named the “issue tree”.

There are two major exceptions. One is objections, and the other is when we start talking about what has been said instead of talking about the subject under discussion. These can make issues complicated and thus confusing.

So, to craft a many-sentence response to your question, the bot just has to repeatedly pose questions to the sentences it has so far and add the answers as new sentences. To do that, it just needs a question generator to complement its answer generator.
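The issue-tree idea described above can be sketched in a few lines of code. This is purely an illustration of the structure, not how any actual chatbot works (as noted, I have no idea how they work internally); the question and answer generators here are toy stubs standing in for the real machinery.

```python
# A minimal sketch of long-form generation via an issue tree:
# repeatedly pose questions to sentences already written and add
# the answers as child sentences. The generators are toy stubs.

def make_questions(sentence):
    # Toy stub: pose two unspoken follow-up questions to a sentence.
    # Multiple questions per sentence are what make the tree branch.
    return [f"Why? ({sentence})", f"For example? ({sentence})"]

def make_answer(question):
    # Toy stub: produce a new sentence answering the question.
    return f"An answer to '{question}'."

def grow_issue_tree(sentence, depth=2):
    """Each node is (sentence, children); children answer the
    unspoken questions posed to this sentence."""
    if depth == 0:
        return (sentence, [])
    children = [grow_issue_tree(make_answer(q), depth - 1)
                for q in make_questions(sentence)]
    return (sentence, children)

def flatten(tree):
    """Read the tree off depth-first as a running sequence of sentences."""
    sentence, children = tree
    out = [sentence]
    for child in children:
        out += flatten(child)
    return out

tree = grow_issue_tree("Chatbots can give long-form answers.")
print(" ".join(flatten(tree)))
```

With two questions posed to each sentence and a depth of two, the seed sentence grows into seven sentences, arranged as a small tree rather than a simple chain.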

However they do it, chatbots are amazing to me. Each is a Wikipedia-like machine that, unlike Wikipedia, is not bound by what has already been written for it. Of course, they can do damage, just as Wikipedia can, so caution is called for.

Wikipedia changed the world of knowing, and chatbots are likely to also, only more so.

Guest Post content does not necessarily reflect the views of the site or its editor. Guest Post content is offered for discussion and for alternative points of view.