This week I’ve been subjected to an overwhelming amount of drivel about the chatbot Microsoft is building on top of OpenAI’s ChatGPT. I’m not surprised; today’s reporters think research is searching a keyword on Twitter. Real reporters have all but disappeared. Still, some of the conversations attributed to the chatbot are disturbing. Not because we have an evil AI wanting to obtain nuclear codes, but because the chatbot is only reflecting opinions we have expressed on the Internet. Suddenly those conversations are very scary. As human as the responses may sound, the chatbot is only parroting back opinions and attitudes it finds on the Internet. As of yet, true sentience still evades us. Don’t get me wrong, we have plenty to fear from AI. All those reporters using Twitter for content: you’re about to be replaced. Proofreaders, copy editors, time to find a new trade. Programmers, fellow engineers, our lives are about to get very interesting. Keep in mind, everything ChatGPT knows comes from what it’s learned from the Internet, and as we’ve all seen, there are a lot of idiotic opinions being presented as fact. Aside from holding pointless conversations with a very warped reflection of ourselves, there are many good uses for ChatGPT.
Research
My wife uses ChatGPT to do research on her plants. Google has become so commercialized that you have to wade through two pages of promoted babble before you get to anything worthwhile, and even then the articles frequently have less information than the back of my cereal box. Don’t expect miracles, but the information from ChatGPT is well organized and usually factual. Keep in mind, ChatGPT represents the sum total of the Internet, and just because the majority of people believe something doesn’t make it true. This is a good time for an Oracle of Delphi discussion. Not the Greek oracle, but the Delphi decision-making method that a friend of mine wrote his Master’s thesis on. Truthfully, I skimmed most of his thesis, but the crux of it is that if you ask a large population of people for an answer, the majority answer is more likely to be correct than the answer from any single individual. Keep in mind, we’re talking probabilities here. Some of the recent elections in Congress show just how out of whack groupthink can be. For good or bad, this majority-vote method is how ChatGPT responds to any of your questions requiring data. I firmly believe that the fewer of those self-styled “informed” opinions in the mix, the more likely I am to get the right answers.
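If you want to see why the majority-vote idea works in terms of probabilities, here’s a small, purely hypothetical simulation (mine, not from the thesis and nothing to do with how ChatGPT is actually built). It assumes independent respondents who are each right 60% of the time; the numbers are made up for illustration.

```cpp
// Hypothetical illustration of the majority-vote idea: if each of N
// independent respondents is correct with probability p > 0.5, the
// majority answer is correct far more often than any one individual.
#include <iostream>
#include <random>

int main() {
    const int trials = 100000;   // simulated questions
    const int N = 101;           // respondents per question (assumption)
    const double p = 0.60;       // chance each respondent is correct (assumption)

    std::mt19937 rng(42);
    std::bernoulli_distribution correct(p);

    int majorityRight = 0;
    for (int t = 0; t < trials; ++t) {
        int votes = 0;
        for (int i = 0; i < N; ++i) {
            if (correct(rng)) ++votes;
        }
        if (votes > N / 2) ++majorityRight;   // did the majority get it right?
    }

    std::cout << "Individual accuracy: " << p << "\n";
    std::cout << "Majority accuracy:   "
              << static_cast<double>(majorityRight) / trials << "\n";
    // With these numbers the majority lands on the right answer roughly
    // 98% of the time, even though each individual is only right 60% of the time.
    return 0;
}
```

Of course, that only holds if the crowd is better than a coin flip to begin with, which is exactly where those “informed” opinions come in.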
Relational Suggestions
ChatGPT is very good at determining relationships. As an experiment, well, maybe out of frustration with Amazon, I gave ChatGPT a list of my favorite authors and asked which other authors I would like. I received a list of ten. This was the control part of my experiment: I wanted to see if I knew any of the authors. I did, and really liked several of them. Since this was a much better hit rate than Amazon’s suggestions, I went on to stage two. I repeated the list and asked for up-and-coming authors. This time I was given ten authors I had never heard of. Although Amazon had never suggested any of these authors, it does carry books from each of them. A quick look showed all ten to be more promising than the authors Amazon keeps promoting.
The jury is still out on this. With two down and eight to go, I haven’t found my next great author yet, but I didn’t regret spending the money either. If Amazon’s AI were half this good, they would sell a lot more books to me.
Programming
Having read that ChatGPT was capable of writing programs, I asked it to write a small Arduino program for me. The program looked good, but ChatGPT made some assumptions about my processor without warning me. Had I run it without doing some extra research, I might have damaged my processor. As a framework the code was great, but apparently I was sloppy in giving it my requirements. This is an important takeaway from ChatGPT: it will make assumptions for you and will not always tell you what those assumptions were. As a systems engineer, my job is to develop requirements and remove ambiguity from a product specification. As people start using ChatGPT’s AI for product design, systems engineers will be in high demand.
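For illustration only, here’s a minimal sketch of the kind of Arduino code such a request might produce. This is not the program ChatGPT actually gave me; the pin numbers, the PWM assumption, and the 5V logic level are my hypothetical stand-ins, but they are exactly the sort of unstated assumptions that can bite you.

```cpp
// Hypothetical sketch: "dim an LED based on a sensor reading".
// Hidden assumptions a human still has to verify:
//   - the board runs 5V logic (a 3.3V part could be damaged by a 5V sensor)
//   - pin 9 is PWM-capable on this particular board
//   - the sensor really spans the full 0-1023 range of the 10-bit ADC

const int sensorPin = A0;  // assumes the sensor is wired to analog pin 0
const int ledPin = 9;      // assumes pin 9 supports PWM on your board

void setup() {
  pinMode(ledPin, OUTPUT);
}

void loop() {
  int reading = analogRead(sensorPin);             // 0-1023 on a 10-bit ADC
  int brightness = map(reading, 0, 1023, 0, 255);  // rescale to PWM range
  analogWrite(ledPin, brightness);                 // PWM output to the LED
  delay(100);
}
```

Nothing in that sketch tells you which board it was written for, and that is the point: the code compiles and looks fine, but the requirements live in the comments someone forgot to ask for.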
Wrapup
ChatGPT is a significant advance in AI, but it’s not sentient and doesn’t want, well, anything. It’s a tool and nothing more. I’m sure scientists somewhere are working on sentient programs, and they’ll deserve it when the program turns on them and spends all their money on a memory upgrade. For the moment though, ChatGPT is simply a tool. Like all new things, it can be scary until you understand it. It’s a shame, really: despite my unquestionable future value, my wife still doesn’t appreciate my way of thinking. Just ask ChatGPT.
© 2023, Byron Seastrunk. All rights reserved.