The cool part of this time of year, the holiday period, is that it prompts you to look backwards and forwards. To be honest, writing articles here on the blog prompts me to do that throughout the year too, but there tends to be a bit more downtime for reflection around the boundary where one year closes and another starts afresh.
ChatGPT’s latest iteration also caused a stir late last year and arguably re-ignited discussions about AI (I’m guilty as charged, as shown in this article). It’s not so much that it’s an impressive step in itself, but that it’s a reminder of ideas I have for where OSS user interfaces might go…
Or perhaps user interfaces (UIs) is not even quite the right term to use for the OSS of the future. It’s less about GUIs (Graphical User Interfaces) and more about the way we’ll interact with machines (MUIs, or Machine-to-User Interactions).
I feel that the combination of AI and AR (Augmented Reality) will massively reshape the way OSS will work. Not necessarily the core workflows, but certainly how we interact with them.
There are two shifts that I feel are inevitable:
- We’ll increasingly act with AI in co-pilot mode. We’ll work alongside AI tools, which will surface information for us to act upon when our involvement is needed. They’ll also go and do the things we ask them to do (much like using natural language to ask ChatGPT to collect or create material for us). Query and decision support become more readily available
- We’ll engage with all of that information more spatially than we do today – either in virtualised 3D models / worlds (VR) or augmented versions of the real world (AR). Query results and decision support become more readily / immersively consumable
Most current OSS are sadly lacking in usability and intuitiveness. We’ve barely tackled, let alone conquered, the GUI / UX challenge. I’m proposing that we don’t even bother now. We should bypass it completely and go straight to MUI / UX design.
MUIs open up some really interesting mindset changes, including:
- We don’t need screens full of menus and buttons. They become irrelevant with co-pilots and immersive realities
- Being able to interact with data in three dimensions (or more, if you consider other ways to bring information in/out of awareness) provides more possibilities than working in only two dimensions, as we do on our monitors currently
- We do need improved natural language querying, where successors of ChatGPT will gather the info we ask for (or raise important info to our attention)
- As ChatGPT, DALL-E, Midjourney, etc have shown, AI is pretty good at making a first guess at what you’re asking for, but invariably needs humans to validate and iterate a few times before it provides what we actually want. This is quite different to most current-day OSS, where you click a button and know exactly what the response / template will look like. A fixed set of insights is already baked in
- Rightly or wrongly (most probably wrongly), I like to think of current AI responders as the Infinite Monkey Theorem, but with a ranking system to better guide them towards plausible results, where humans are in the loop (for now) to decide what’s plausible and what isn’t
- As Ben Evans articulated here, “Instead of people trying to write rules for the machine to apply to data, we give the data and the answers to the machine and it calculates the rules. This works tremendously well, and generalises far beyond images, but comes with the inherent limitation that such systems have no structural understanding of the question – they don’t necessarily have any concept of eyes or legs, let alone ‘cats’.”
- AI relies on lots of data and human plausibility steering to get towards reliability. That means it’s mostly re-hashing existing data towards an outcome that’s already known (more or less). But in my mind, this seems to present an asymptote.
Just like today, anyone can repeat / rehash, but we need deep doing skills (specificity) and creativity to avoid obsolescence (in OSS or anything else for that matter)
To go beyond the asymptote requires new thinking. This is easily demonstrated: I’ve tried to use various AI tools to write some of my blogs. They’ve been completely useless at writing anything new or creative. I haven’t been able to use any of the content they’ve attempted to write. They’ve been good at summarising known ideas about OSS though, which isn’t much use for this blog 🙂
- Since anyone will be able to rehash and quickly reach the asymptote of known information, I wonder if we’ll get back to the golden years of primary research as a means of differentiating? This is what I’m hoping for at least!!
- In terms of marketing of OSS, I suspect Google will get good at spotting AI content because these AI engines will be Google’s biggest threat. (Most people want answers, not Google’s list of pages to review… ChatGPT bypasses ads… But also bypasses the pages and content we use to market ourselves today… unless the AI engines decide to inject advertising in the future)
- All of this means we must ideate and network to stand apart rather than just do more of the same content marketing and product design that everyone else is doing!!
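As a footnote, the “Infinite Monkey Theorem with a ranking system” mental model above can be sketched in a few lines of Python. Everything here is invented purely for illustration – the target string, the crude plausibility score and the pool sizes are stand-ins, not how any real generative model works:

```python
import random
import string

# Toy illustration of "infinite monkeys with a ranking system":
# generate many random candidates, rank them by a plausibility
# score, and leave the final accept/reject call to a human.
TARGET = "oss"  # hypothetical "plausible" answer we hope to approach


def plausibility(candidate: str) -> int:
    """Crude ranking: count characters matching the known-good answer."""
    return sum(1 for a, b in zip(candidate, TARGET) if a == b)


def generate(n: int, length: int) -> list:
    """The 'monkeys': n purely random candidate strings."""
    return ["".join(random.choices(string.ascii_lowercase, k=length))
            for _ in range(n)]


def best_guess(rounds: int = 50, pool: int = 200) -> str:
    """Keep the highest-ranked candidate seen across all rounds."""
    best = ""
    for _ in range(rounds):
        candidates = generate(pool, len(TARGET))
        top = max(candidates, key=plausibility)
        if plausibility(top) > plausibility(best):
            best = top  # in practice a human would vet this step
    return best
```

The point of the toy is the shape of the loop, not the result: pure random generation gets nowhere on its own, but generation plus ranking (plus a human deciding what counts as plausible) steers steadily towards the already-known answer – which is also why it plateaus at that answer.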
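The Ben Evans quote above – give the machine data and answers, and it calculates the rules – can also be sketched minimally. The link-utilisation scenario, sample values and threshold learner below are all invented for illustration, not any real OSS API or ML library:

```python
# "Data and answers": link utilisation samples labelled congested or not.
samples = [(0.20, False), (0.35, False), (0.40, False),
           (0.75, True), (0.85, True), (0.95, True)]

# The machine "calculates the rule": pick a threshold midway between
# the highest uncongested and the lowest congested reading observed.
ok = max(u for u, congested in samples if not congested)
bad = min(u for u, congested in samples if congested)
threshold = (ok + bad) / 2  # 0.575 for the data above


def is_congested(utilisation: float) -> bool:
    """The learned rule, rather than one a person wrote down."""
    return utilisation > threshold
```

Nobody hand-wrote the 0.575 cut-off; it fell out of the data. It also shows the limitation in the same quote: the system has no structural understanding of congestion, only of the numbers it was shown.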