What’s a Chat UI then?
I’m old enough to remember green screen terminals. When I started at IBM in the 80s, IBM was one of the few companies to have email (PROFS, it was called, and I can’t quite remember why. ‘O’ stood for office, I think). Anyway, that’s how we used to work back then, and if you wanted a fancy document, you could use a markup language and compile it for print.
(Aside: people like me even bore ourselves. You know, ‘back in my day, a 1KB drive could only fit in an oil tanker,’ and so on. So bear with me, please; I’ll get there without dredging up too much obvious nerdy history.)
Then along came Windows. Well, along came Apple really, and up popped Windows looking remarkably similar. Initially, we had the same systems in Windows in different colours, which was kind of a ‘so what.’ But then new apps started arriving that exploited the new UI and the interconnectivity of apps within a PC and then across networks and systems.
Things took a sudden turn with the Internet’s arrival in the mid-90s. Initially, websites were simple, static information displays, but in the background, email went global, as did information. SaaS happened, as did eCommerce, and UIs that were once simple HTML became much more interactive as browser power increased and libraries like React and, more recently, design systems like Material became prevalent.
Where am I going with this?
Screens, keyboards, and, latterly, mice have only arisen so we can enter information, manipulate systems, and see the results. They are not necessarily the natural way we communicate; they are more a tech interface that allows us to use our fingers via a keyboard and our eyes via a screen for human-system I/O. This is definitely not how we humans like to communicate and would have had no place before the computer age.
Hollywood loves to show spaceship dashboards with a big bank of switches and flashing lights, but really the future spaceship pilot (probably an AI, as no human will be able to fly something that complicated) will just have a screen that says either:
A: Everything’s cool.
B: We’re all going to die unless you do A, B, C, and X. But not Y and Z. Do those, and we die.
The same goes for airplanes. I’m not convinced that pilot dashboards aren’t just stuck on to make us passengers think that humans really could fly it if things went wrong. Much of a modern flight is computer-controlled anyway, so what’s the handsome guy/girl in the fancy uniform really for? (Apologies to all pilots; please don’t go anywhere else.)
Most of us don’t like committing unnatural acts, and given the chance, we will revert to type. Typing, clicking, and windows have only recently arrived, and they are used for I/O because that’s what’s been there in recent times. But is it set to change?
With computers now able to understand what we mean when we talk/chat via Large Language Models such as GPT, we no longer need to be so explicit with our commands and actions. ChatGPT is an example – searching and selecting from results is tedious when compared to asking a question and getting an answer. Moreover, having a conversation about that answer to further understand it is how we humans work. Take these three examples:
1. On a plane, who really cares where they sit beyond window versus aisle, distance from the toilets, and legroom? The old-style seat map, then click to select, is a bit tedious and slightly overwhelming.
2. How long do you spend looking for that seldom-used function in Word or Excel? “How do I insert a tick?” “How do I put the leftmost 3 characters of column B in the middle of Column F?”
3. In your CRM, those searches and reports you set up need some reading and interpretation to get the result you want – “what sells the most and costs the least to sell.”
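As an aside, the second of those requests has a tiny deterministic operation hiding behind it; here is a hedged sketch in Python of the string manipulation a chat UI would ultimately perform on the user’s behalf (the column data is invented for illustration, and Excel itself would use the LEFT function):

```python
# Invented sample data standing in for column B of a spreadsheet.
column_b = ["Widget-100", "Gadget-200", "Sprocket-300"]

# "The leftmost 3 characters of column B" is just a slice,
# the equivalent of Excel's LEFT(B2, 3) filled down the column.
column_f = [value[:3] for value in column_b]

print(column_f)  # ['Wid', 'Gad', 'Spr']
```

The point, of course, is that the user never needs to know that LEFT exists, let alone its syntax; the chat UI translates intent into the operation.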
UI-wise, not much has changed since Windows. But the UI is on the march again. Chat interfaces are on the rise.
In those three examples above, “give me the best legroom seat for under 20 bucks,” “Put the leftmost 3 characters of column B in the middle of Column F,” and “what sells the most and costs the least to sell” seem much better ways of getting things done.
One of our clients recently demoed their new SaaS, with our nascent chat UI in it, to a global enterprise prospect. The prospect went mad for it and basically ordered everything, on the proviso that the chat UI became the main interface.
Why is this?
A Chat UI (CUI) is easy to learn, provides only the requisite information, and allows a conversation (chat or voice) with the system. It mimics the way we communicate rather than forcing a new one on us. It supports voice and is multilingual. And it can learn and tailor itself to the user.
A CUI allows us to treat an app as if it were human, but one with vast knowledge and 24/7 availability, allowing much greater productivity.
Will keyboards, mice, and screens die? Not in a hurry, I think: mass data entry, moving things around a screen, and displaying results on that screen all look set to remain, though perhaps in decline. What we might now see, though, is a new wave of apps that cut to the chase with a chat UI, perhaps paired with wearable output.
Look out in the future for annoying mobile texters standing stationary in the middle of doorways being replaced by people walking around muttering to themselves and bumping into lampposts.
Our platform, CaptivateChat, allows developers to inexpensively build a Chat UI into their app over any channel using any bot. Head over here to find out more…