Will AI push aside the computer mouse?

27 / 06 / 2023

AI has been 70 years in the making. Finally, we may see a change in how we interact with computers.


I’ve been playing around with ChatGPT ever since it was released to the public, but the focus of this post is the next iteration in the form of Bing’s new chat feature, Google’s integration of AI into Workspaces, Adobe following suit, and many other potential future applications. But for now, let’s stay with large language models…

I think this is a PARC moment.

To explain what I mean by that and why the computer mouse is at the center of it, we must take a little detour. This story takes us back to the late 1960s, when Douglas Engelbart developed the first computer mouse and gave the “Mother of All Demos”. Shortly thereafter, in the early 1970s, we see the first proper application of the computer mouse, shown in conjunction with the first graphical user interface at Xerox’s Palo Alto Research Center (also known as PARC).

PARC experimented with many technologies, but it was a certain Steve Jobs who saw the Alto in action and took away three things: the UI with computer-mouse input, networking, and object-oriented programming. It was the UI and the computer mouse that captured his whole attention. He knew instantly that this was a form of democratisation of computing and would be the way we interact with computers from then on. This insight led him to start the development of the Lisa computer and later the Macintosh.

So, the PARC moment is when our interaction with computers changed. We went from typing cryptic commands to using a computer mouse to point and click, and later touch, to execute commands.

I suspect we are on the cusp of another PARC moment, but this time we’re using natural language to interface with computers - which could be typed, spoken, or maybe even “thought”. In other words, we’re entering the era of Star Trek-like computer interaction.

How have I reached this conclusion? It’s simple: I’ve experienced a small (for now) gain in productivity. Specifically, I’ve used Bing’s new chat feature to create shell scripts, code fragments, and introductory text. And while it’s never perfect, it’s good enough (or in a form where a couple of tweaks make it useful enough). But it’s not just a time saver, it’s also a learning device. Bing Chat showed me different ways to do things, and that’s kind of fascinating.

And this is just the first glimpse of a whole new world opening before my eyes. Consider how limited the Alto, the Lisa, and later the first Macintosh were. It took several iterations of the Macintosh, especially the Macintosh II, to make a dent in the market and to find its killer app (desktop publishing). I see the current Bing Chat similarly - it is still in its early phases (leveraging the equally young GPT-4), but one can only wonder what will happen in the next few years. Will it be able to write whole applications? Or movie scripts? Help in strategy sessions and create the perfect solution? When we start thinking about these possibilities, it’s understandable that many see their jobs in uncertain terms.

But to get there, a lot more needs to happen. The new Bing Chat is only as good as the data it’s being fed and, arguably, a lot of this data comes from the Internet. In other words: garbage in leads to garbage out. It will be a mammoth task for academia, big tech, and everyone else that uses large language models to clean up the data, block unworthy data sources, and reduce bias at the same time. And even that doesn’t yet address the problem of hallucinations (a term that describes how the models sometimes create untruths), which needs to be resolved for any of these models to become trustworthy. Copyright and governance challenges will need to be resolved as well. But once we have accomplished all that, I have no doubt the world will never be the same.


In an interview, Geoffrey Hinton (often called the godfather of deep learning) suggests that we’re in a paradigm shift from programming computers to showing computers - a completely different way of using computers than we teach today in academia. He suggests the mindset in computer science needs to make that shift: from programming computers to showing them what they need to learn. It’s a very interesting observation, and it raises the question: when will humans become obsolete?

It took 70 years for AI to crack the Turing test (named after Alan Turing, who created the test to see if computers could become indistinguishable from humans in their answers). It could very well take another 70 years or more to take the next leap. Hinton himself admits that these large language models do not work the same way the brain does - and they use far more power than the brain’s roughly 20 watts. They will have to improve a lot to become better than our own brain. So, the question remains: will we have a system as good as our brain in the next 70 years?

I coined the term “PARC moment” because of PARC’s significance in developing computer technology that focused on usability and allowed most people to take up the “mouse”. It may well remain a singular event, unless I am right and the changes we saw in the 1970s are now happening again with AI.

If you want to discuss or debate, feel free to reach out: Urs.Gubser@worldline.com

Urs Gubser

Head Innovation Merchant Services
Urs started his professional life as a software engineer in securities trading. Originally from Switzerland, he brought his skills and know-how to New York and Hong Kong. While on assignment in Hong Kong, he caught the payments bug. He and his family finally moved back to Switzerland in 2015 after 17 years abroad, where he joined SIX Payment Services as Head of eCommerce. Today, Urs is focused on customer experience and seamless end-to-end user journeys. Urs holds an MBA from Manchester University, UK.