I’ve been seeing a lot of examples recently of an AI chatbot that answers your questions with lengthy replies, vs. Google, which pulls up the most relevant webpages. Many people feel this is the future of search.
It may be. And a new future for search is not unwelcome. Google has gotten progressively less useful and more paternalistic in recent years, so more options sound great.
When I see the responses from this bot, it makes me wonder what the reference material is, and who writes the code that determines how this information is treated. Results may be faster, and presented in easier-to-digest plain English (or at least the uncanny-valley AI attempt at it), but is it better or worse when it comes to propaganda, censorship, and social engineering through exclusion and framing?
It seems a world with a single robot-delivered result is more fraught with potential for abuse than a world with a bunch of (mostly) human-created results that a human has to sort through. Both can be manipulated, and both require human coders to set parameters, but the messier, less precise results of the latter seem to at least provide some opportunity for unlikely or even unwanted findings.
As the internet gets eaten by AI generated content, perhaps the distinction between these two types of search blurs anyway. I’m not opposed to any of this, but I have a heightened sense of awareness that the internet is not the free-flowing place it once seemed to be.
All of these tools can be useful and wonderful if they are understood for what they are and not mistaken for something else. Treating an AI search tool as the source of truth would be unwise. Treating it as an interesting way to get one presentation of ideas or facts could be useful.