Will we need to be AI experts to use AI?
I watched all of Foundation on Apple TV+ recently, after reading the first five books years ago. Since then, I’ve been meaning to read more Asimov, starting with the Robot series. Asimov coined the term “robotics”, and he was among the first to explore the scientific principles that might underpin futuristic robots. Although the idea of robots – mechanical humans – had appeared in fiction before, Asimov treated them more philosophically. His books grapple with the implications of the Three Laws of Robotics: first, that a robot may not harm a human (or, through inaction, allow a human to come to harm); second, that a robot must obey a human, unless that would contradict the first law; and third, that a robot must protect itself, where doing so doesn’t conflict with the first or second law. The main books in the Robot series are detective murder-mysteries, which usually turn on some aspect of the Laws going awry (for instance, a robot can hand a human a gun and later take it back without ever having harmed anyone itself, even if the gun is fired in between).
The quote that got my attention, though, was this one:
The context of the quote doesn’t matter too much: Baley, the main (human) character, has asked a robot whether it knows how to contact someone on the planet, to which the robot replies that it does. Baley is pointing out the futility of asking such a question – a robot will always know how to do something, so you may as well just ask it to do the thing directly. To be truly efficient with robots, you have to know the best way to talk to them. How well does the average Earthman, or Solarian, manage? Probably only averagely.
This got me thinking about AI tools like ChatGPT. I keep seeing ChatGPT “cheat sheets” that claim to tell you the best way to phrase prompts to get the best possible answers. Not only do more specific questions lead to more specific answers, but the way a question is asked can produce a better result.

To my mind, this is closely analogous to Google searching and library look-ups. The art of the Google search should be encouraged more in schools, because we are drifting away from the habit of looking things up for ourselves and towards relying on other people to do it for us (and I have seen this in people my own age). There’s an art to a good Google search: knowing which websites will yield useful results; knowing when to approach a query slightly differently to get better results; using Boolean operators and the like to narrow a search down (quoting an exact phrase, excluding a term with a minus sign, restricting results to one site with site:). The best researchers, I think, are really just the best at using the search tools they have available.

The same goes for library searches: at the MSA conference last year, I saw a paper by Caleb Triscari, who was presenting on music metadata in Australian library collections. He said the search inputs at some institutions are terrifying: researchers trying to use library search bars like Google or ChatGPT – “can you tell me where I can find a book on XYZ subject?” – instead of simply searching for the subject, or an author, or almost anything else!

I think the same sort of learning is going to be needed for AI tools like ChatGPT. On the other hand, we probably don’t need to use AI at all – but if we are going to be sold the idea that AI is for the betterment of humanity, then I want to see how an AI assistant can genuinely improve someone’s work output. Until we get there, I think it will remain an intriguing idea, but not one actively used by many people. It seems entirely feasible that the people who learn to use AI well will be the ones who achieve more in the future (much as someone who learned to use Google efficiently in its early days could pull far ahead).
I just hope that the people working on AI are, like Asimov, smart enough to build in safeguards for humanity’s safety.