Many OSS UI (User Interface) design principles have barely changed in decades. These principles have served us well.
But the iterative dialogue model presented by today’s generative AI tools provides a fundamentally new framework for reimagining OSS UIs.
Today we’ll explore 18 Agentic Experience (AX) principles that we’ve picked out of a letter shared by Greg Isenberg. We’ll help you to evaluate which ones are relevant and could potentially transform your OSS into a true AI-powered teammate.
As Greg suggests:
“There’s something happening to software that most people haven’t noticed yet, but once you see it, you can’t unsee it. We’re reaching the end of interfaces as we know them. I don’t mean interfaces are disappearing. I mean the fundamental relationship between humans and software is changing.”
Greg’s article helped bring together many disconnected thoughts I’ve had in this space and added a few new ones to the list (including the concept of AX itself!).
.
Summary of 18 AX Principles
| ID | Key AX Insight | Applies to OSS? | Rationale |
|---|---|---|---|
| 1 | Transactional -> Conversational interfaces | Yes | OSS users benefit from chat-style workflows (e.g. “Why is link X down?”) that let them interrogate the system in natural language |
| 2 | Every path is hard-coded vs System plans its own path | Yes | Instead of hard-coding every rule or every workflow branch, an agentic OSS infers intent, selects appropriate actions and adapts on-the-fly without requiring designers to script line-by-line for every different flow-variant. Business logic no longer needs to be “baked in,” simplifying design |
| 3 | Stateless -> Stateful experiences | Yes | Persisting incident context (related alarms, project workflows, past fixes, topology changes) has the potential to reduce MTTR by avoiding “context amnesia” |
| 4 | Tools -> Teammates | Yes | Treating OSS as an AI-driven partner that learns operator preferences, fostering a collaborative relationship |
| 5 | Isolated tasks vs ongoing projects | Yes | OSS workflows span multi-day projects or even cascading projects. Recognising evolving goals and history is crucial for planning and troubleshooting |
| 6 | AI maturity enables intent inference | Yes | Modern models can infer why an alarm fired (component failure, configuration drift, congestion) and suggest root-cause hypotheses |
| 7 | Cheap storage allows full context retention | Yes | Accessible persistence of logs, tickets and topology snapshots means an OSS can maintain a richer timeline of every entity’s state |
| 8 | Growing user trust in autonomy | Yes | Operations teams accept automated remediation for routine faults, provided the system proves its reliability and transparency |
| 9 | Measure trust, not just clicks | Yes | OSS KPIs have the potential to shift from “tickets closed” to “autonomy ratio” / “recommendation acceptance rate” if appropriate checking mechanisms are implemented |
| 10 | Trust gradient: start transparent, then automate more | Yes | Initial automation proposals include confidence scores and require sign-off (humans in the loop), evolving to unsupervised fixes for known issues |
| 11 | Context-based network effects | Yes | For OSS vendors, an OSS that accumulates knowledge of network-specific quirks becomes more valuable for the client (and therefore harder to replace) than one offering discrete / isolated features |
| 12 | Dynamic (learned) automation over static workflows | Yes | Instead of hard-coded playbooks or rules engines, an agentic OSS observes network situations, operator actions and automates them adaptively |
| 13 | New design patterns for relationship building | Yes | OSS UIs need confidence indicators, inline explanations and correction flows to support a trust-based partnership model |
| 14 | Netflix-style “surprise” recommendations | Possible | OSS users typically need precise, relevant, repeatable insights on network health, not serendipitous content discovery. However, it may be beneficial for the algorithms to surface insights that the operator was never even looking for, such as suggesting network configuration changes that are more resilient |
| 15 | Mood-based personalisation (Spotify DJ) | No?? | Automation in OSS is predictable, driven by deterministic responses to technical statuses, thresholds and SLAs, not in an ad-hoc manner based on an operator’s changing emotional state |
| 16 | Social-driven engagement metrics (likes, shares) | Possible | OSS platforms don’t measure engagement via social interactions. There’s no concept of “liking” an alarm or a device. However, social sentiments could help with scenarios like data integrity confidence. |
| 17 | Consumer community network effects | Possible | Value in OSS comes from operator–agent context accumulation, not from users influencing each other’s experience. However, perhaps there is the potential for the “power of the crowd” influencing decisions, such as via choice of the best knowledge entries |
| 18 | AI “companionship” for leisure tasks | Possible | Operators need rigour and safety first. Framing an OSS agent as a “companion” risks trivialising critical workflows. However, it may have the potential to introduce the concept of an “expert advisor,” especially as a “canary on the shoulder” of new starters |
.
The following sections dive deeper into each row of the summary table above. (Consider the table your TL;DR!)
1. Transactional -> Conversational Interfaces
Greg notes that “the fundamental relationship between humans and software is changing from transactional to conversational”. In OSS, replacing rigid form-based interactions with natural-language exchanges allows engineers to pose queries like “Which route failed when traffic spiked yesterday?” rather than navigating menus.
This conversational layer speeds diagnostics and reduces cognitive load.
By treating the interface as a dialogue, OSS can summarise context, suggest follow-up questions and even draft remediation steps. Over time, operators learn that their agent “remembers” prior exchanges, fostering adoption and deeper collaboration.
These three videos provide hints into what is possible in next-gen OSS interfaces:
The EnterpriseWeb team has created the video below – with a preamble about what is to be demonstrated and then the demo itself beginning at 11:46 (link direct to demo section of the video).
In this video, Eric Schmidt, former CEO of Google, has spoken about the old UI design acronym WIMP, which stands for Windows, Icons, Menus, and Pull-downs. He emphasises that this model was created around 50 years ago at Xerox PARC and argues that traditional user interfaces based on WIMP are becoming obsolete.
Schmidt predicts that AI will transform user interfaces by allowing natural language interaction and generating interfaces dynamically on the fly, tailored to the user’s intent rather than fixed layouts. He questions why we still have to be stuck in the WIMP paradigm when AI can create more responsive and adaptive UI experiences for solving problems.
The only challenge I see in this is that OSS interactions are often high-volume or high-importance, where speed is of the essence. If humans are involved in these interactions, we want the WIMPs in consistent locations rather than having to spend brain-cycles “finding the buttons!” I’m sure clever AX designers can overcome this small challenge.
.
2. Every Path is Hard-coded vs System Plans Its Own Path
Greg highlights that traditional UX requires a designer to pre-plan every path (ie flows are hard-coded, and if a user veers off the happy path, error states appear).
In an agentic experience, by contrast, the “system plans its own path – it senses, infers, and chooses actions the designer didn’t have to script line-by-line.” Rather than forcing users into rigid, predefined sequences, AX lets the software dynamically chart a course based on context, intent and evolving conditions.
In OSS, this shift means moving away from brittle, rule-based playbooks toward inference-driven remediation. Instead of manually coding every alarm-to-action mapping or ticket workflow, an agentic OSS observes network state, topology changes and operator history to autonomously select the appropriate response.
This helps to reduce the complexity of the “Business / Logic” Layer discussed below.

For example, if a spike in packet loss coincides with a recent firmware update, the system can infer rollback or patch strategies without requiring engineers to anticipate that exact scenario in advance. This allows the OSS to adapt on the fly as the network landscape evolves. Alternatively, if an operator is responsible for managing the transmission network in the northern region, the operator may just ask the UI to flag anything that requires attention (like in the second video above) rather than having to process all information about the region (ie finding the needle in the haystack).
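As a thought experiment, that change-correlation logic might look something like the minimal sketch below. All of the event fields, device names and action labels here are invented for illustration, not drawn from any real OSS API:

```python
from datetime import datetime, timedelta

def suggest_remediation(symptom, recent_changes, window_hours=24):
    """Correlate a symptom with recent changes and suggest a remediation.
    A change that landed shortly before the symptom appeared is treated as
    the likely cause; with no plausible culprit, we escalate to a human."""
    window = timedelta(hours=window_hours)
    for change in recent_changes:
        gap = symptom["time"] - change["time"]
        if timedelta(0) <= gap <= window:  # change happened just before the symptom
            if change["type"] == "firmware_update":
                return {"action": "rollback_firmware", "target": change["device"]}
            if change["type"] == "config_change":
                return {"action": "revert_config", "target": change["device"]}
    return {"action": "escalate_to_operator", "target": symptom["device"]}

# A packet-loss spike 90 minutes after a firmware update on the same router
symptom = {"type": "packet_loss_spike", "device": "rtr-01",
           "time": datetime(2025, 6, 1, 10, 0)}
changes = [{"type": "firmware_update", "device": "rtr-01",
            "time": datetime(2025, 6, 1, 8, 30)}]
print(suggest_remediation(symptom, changes))
# → {'action': 'rollback_firmware', 'target': 'rtr-01'}
```

A real agentic OSS would replace these hand-written rules with learned inference, but the shape is the same: sense, correlate, choose, without a designer scripting every branch in advance.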
.
3. Stateless -> Stateful Experiences
“Most software today treats every interaction like meeting a stranger. You open an app, tell it what you want, it gives you what you asked for, and then forgets you exist. Every session starts from zero. This made sense when software was simple and computers were dumb, but it’s becoming obviously wrong as AI gets smarter,” Greg writes.
OSS traditionally “starts from zero” on every transaction, such as each alarm, forcing engineers to reconstruct histories from disparate logs. By persisting context (linking related alarms, past fixes and topology changes), an agentic OSS eliminates this “context amnesia.”
Stateful experiences mean that when a field technician logs in, they immediately see an evolving incident timeline, complete with operator notes and prior corrective actions (either their own or their colleagues’). This continuity accelerates troubleshooting and (hopefully) ensures no critical detail slips through the cracks.
We move from “single-shot” tasks to ongoing goals and iterations (with greater contextual awareness).
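To make the stateless-to-stateful shift concrete, here’s a minimal sketch of an incident record that accumulates rather than forgets. The class name, event kinds and incident ID are all hypothetical:

```python
class IncidentTimeline:
    """Minimal stateful incident record: every interaction appends to a
    persistent timeline instead of starting the session from zero."""

    def __init__(self, incident_id):
        self.incident_id = incident_id
        self.events = []

    def record(self, kind, detail):
        # Alarms, operator notes and fixes all land on one shared history
        self.events.append({"kind": kind, "detail": detail})

    def context(self):
        # What a technician sees on login: the full evolving history
        return [f'{e["kind"]}: {e["detail"]}' for e in self.events]

tl = IncidentTimeline("INC-1042")
tl.record("alarm", "Loss of signal on link X")
tl.record("note", "Suspected fibre cut near site 12")
tl.record("fix", "Traffic rerouted via protection path")
print(tl.context())
```

In practice the timeline would be persisted and enriched with topology snapshots, but even this toy version shows the difference: the next person (or agent) to open the incident inherits everything that came before.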
.
4. Tools -> Teammates
Greg argues we’re moving “from tools to teammates”. In OSS, this means shifting the mindset from one-off tasks (such as running scripts) to collaborating with an AI partner. Rather than manually executing commands, operators converse with the agent, which proposes actions, explains its reasoning and adapts to feedback.
Treating OSS as a teammate fosters a relationship where the system learns operator preferences. This might include preferred remediation scripts, escalation paths and reporting formats, leading to personalised suggestions and a more efficient workflow.
This even changes the KPIs we use to measure effectiveness. We might no longer think in terms of alarms acknowledged, but in terms of the trust/confidence we place in tasks delegated to algorithms.
.
5. Isolated Tasks vs Ongoing Projects
“Humans don’t think in terms of isolated tasks. We think in terms of ongoing projects, evolving goals, and accumulated context,” Greg observes. OSS workflows often involve multi-day network rollouts, maintenance windows and compliance audits. They often involve tasks that have end-to-end relevance to customers (eg the left side of the workflow below), but only single-task repetitive processing by operators (the right side of the workflow).

Recognising these as continuous endeavours rather than discrete tickets is vital.
An agentic OSS can group related tasks into project buckets, track progress and remind operators of pending dependencies. This project-centric view (left side of the chart above) reduces errors in complex deployments and ensures a unified approach to change management.
.
6. AI Maturity Enables Intent Inference
Greg highlights that “language models can now infer intent, maintain conversation history, and make reasonable decisions based on incomplete information.”
For example, in network operations, intent inference allows the agent to discern why an alarm was raised (eg due to configuration drift, congestion spikes or hardware faults) and suggest targeted root-cause analyses (RCA). But RCA is just one example of the dots we already know to connect. The exciting part of LLMs is their ability to connect dots we’d never thought to connect before: to identify intents we’d never even considered, or patterns of outcomes we’d never aggregated before.
.
7. Cheap Storage Allows Full Context Retention
“When memory and storage were expensive, stateless interactions made economic sense. Now that compute is cheap, there’s no technical reason why software can’t remember everything about how you work,” Greg notes.
OSS data volumes have exploded, yet modern storage is cost-effective enough to retain all logs, tickets and configuration snapshots. In the past, most collected OSS data was never used.
Seth Godin shared three rules of data:
“First, don’t collect data unless it has a non-zero chance of changing your actions.
Second, before you seek to collect data, consider the costs of processing that data.
Third, acknowledge that data collected isn’t always accurate, and consider the costs of acting on data that’s incorrect.”
In relation to Seth’s Data Rule #1, agentic tools are drastically increasing the likelihood of OSS data being useful.
This aligns well with the “connecting seemingly unconnectable dots” point that Steve Jobs made in his famous 2005 Stanford Commencement address.
“You can’t connect the dots looking forward; you can only connect them looking backwards. So you have to trust that the dots will somehow connect in your future.”
OSS tools are great at collecting dots (data points). Too many for human brains to connect. But by archiving this rich context, agentic OSS platforms have the potential to build a detailed operational history for every network element. This archive not only aids current troubleshooting but serves as the basis for predictive maintenance, trend analysis and so much more!!
We can collect many dots (eg vendor documents such as user guides, provisioning guides, API specifications, or standards like Open API specs, or internal materials like workflows) and use agentic means to help suggest how to connect them. “Chat-changing” the network like in the third video above is a great example that avoids having to write detailed interface specification documents and determine all the data mappings required along the way.
.
8. Growing User Trust in Autonomy (trust visualisation)
Greg observes that “people are comfortable with systems that make suggestions, automate routine tasks, and learn from their behaviour. The trust threshold shifted from ‘do exactly what I say’ to ‘help me accomplish what I want.’”
Network engineers will increasingly accept automation, such as automated circuit re-provisioning, but only once the system has proven to be transparent and reliable. Our future UI / UX needs to provide “algo transparency” mechanisms that allow us to transition from manual to auto, whilst building confidence / trust that the algorithms are doing the right thing. We talk about this in Dilemma #6 in “12 dilemmas that we face on the journey to zero-touch operations.”
Building trust requires starting with low-risk tasks, clearly explaining each action and demonstrating consistent accuracy. As confidence grows, operators will entrust more critical remediations to the agent, boosting efficiency and reducing human error.
.
9. Measure Trust, Not Just Clicks (trust KPIs)
In Greg’s view, “instead of measuring clicks and conversion rates, you measure trust and delegation.” Not only do we need “algo transparency” mechanisms built into our UIs, but we also need to focus on new KPIs.
Traditional OSS KPIs such as alarm acknowledgements, ticket closure counts, script execution volumes, etc all reflect activity, not collaboration. Even our traditional “tick and flick” approach to escalations is about collaboration without really collaborating.
An agentic OSS should track metrics such as autonomy ratio (percentage of tasks handled by the agent) and recommendation acceptance rate.
By adopting visibility and performance indicators that support trust growth, organisations can quantify the evolving partnership between operator and AI, demonstrating ROI and justifying greater investment in OSS transformations.
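As a rough sketch, the two trust KPIs mentioned above could be computed from a simple task log. The log shape and field names here are invented purely for illustration:

```python
def trust_kpis(tasks):
    """Compute trust KPIs from a task log. Each task records who handled
    it ('agent' or 'human'); agent recommendations also record whether
    the operator accepted them."""
    total = len(tasks)
    agent_tasks = [t for t in tasks if t["handled_by"] == "agent"]
    # Autonomy ratio: share of all tasks handled by the agent
    autonomy_ratio = len(agent_tasks) / total if total else 0.0
    # Acceptance rate: share of agent recommendations the operator accepted
    recs = [t for t in agent_tasks if "accepted" in t]
    acceptance_rate = sum(t["accepted"] for t in recs) / len(recs) if recs else 0.0
    return {"autonomy_ratio": autonomy_ratio, "acceptance_rate": acceptance_rate}

tasks = [
    {"handled_by": "agent", "accepted": True},
    {"handled_by": "agent", "accepted": True},
    {"handled_by": "agent", "accepted": True},
    {"handled_by": "agent", "accepted": False},
    {"handled_by": "human"},
]
print(trust_kpis(tasks))
# → {'autonomy_ratio': 0.8, 'acceptance_rate': 0.75}
```

Tracked over time, an autonomy ratio and acceptance rate trending upward together is exactly the “trust growth” signal these new KPIs are meant to capture.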
.
10. Trust Gradient: Start Transparent, Then Automate More (trust evolution)
This one is closely linked to the trust discussion in the previous two items: they covered trust visualisation and measurement, whereas this one covers trusted actions.
Greg describes a “trust gradient” where early in the relationship the agent “shows its work extensively” and later “takes initiative on tasks the user hasn’t explicitly requested.”
For OSS, initial deployments should present remediation options with confidence scores and detailed explanations, requiring operator sign-off (human in the loop).
As the system proves its reliability, it can progress to fully automated healing for routine incidents, reducing manual workload while preserving safety for novel or high-impact faults. This is similar to the way we build trust in human operators as they demonstrate increasing capability on their path from apprentice to master. With the increasing trust comes increasing levels of autonomy being given.
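One way to picture the trust gradient is as a simple dispatch gate, where a proposal’s confidence score and the agent’s demonstrated track record jointly decide how much autonomy it gets. The thresholds and labels below are illustrative assumptions, not recommendations:

```python
def dispatch(confidence, agent_success_rate):
    """Trust-gradient gate (thresholds illustrative): low-confidence
    proposals or an unproven agent go to a human, while routine,
    high-confidence actions from a proven agent are applied automatically."""
    if confidence >= 0.95 and agent_success_rate >= 0.99:
        return "auto_apply"            # unsupervised fix for a known issue
    if confidence >= 0.80:
        return "propose_with_signoff"  # human in the loop
    return "escalate"                  # novel / high-impact: human-led

print(dispatch(0.97, 0.995))  # a proven agent, a routine fix
print(dispatch(0.97, 0.60))   # the same fix, early in the relationship
```

Note how the same proposal is gated differently depending on accumulated track record: that is the apprentice-to-master progression in code form, with the sign-off threshold loosening only as the agent earns it.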
For example, this article discusses the cumulative benefits of AIOps, as the algorithms take increasing responsibility for closed-loop assurance. It’s also articulated in the asymptote diagram below.

.
11. Context-Based Network Effects
“The more you use the software, the more valuable it becomes because it understands your patterns,” Greg writes. This wasn’t the case previously, but is now possible as the software learns, as per point #3 above.
OSS platforms that learn peak traffic times, typical failure modes and operator preferences accumulate a contextual moat. Switching to a competitor means losing that accumulated understanding.
OSS solutions are already renowned for their “stickiness” (ie difficulty to replace). However, this is mostly driven by the complexity and risks in OSS change-out projects. The tools themselves are, more-or-less, like-for-like and readily replaceable.
The stickiness factor ramps up significantly though when an OSS accumulates more contextual knowledge of a network operator’s environment (of networks, systems, processes, integrations, etc). A learning OSS becomes far more valuable for the client (and therefore harder to replace) than one offering discrete / isolated features. It’s this derived knowledge, earnt over long periods of time, that makes OSS solutions almost impossible to replace.
If I were an OSS product owner, stickiness would be a big focus for how I’d be steering my product roadmap. Not from the nefarious perspective of locking clients into solutions they don’t want, but from the positive perspective of creating a solution that’s immensely context-aware and valuable.
.
12. Dynamic (Learned) Automation Over Static Workflows
Greg points out that “the system learns to automate whatever the user finds tedious or repetitive,” which is why OSS came into being in the first place. Telcos handle high-volume, highly repetitive tasks that are better suited to machine processing than human processing.
Rather than relying on static, rule-based playbooks, an agentic OSS amplifies this further. It has the potential to observe operator adjustments, such as ongoing manipulations of configs or operational thresholds, and codify them as adaptive automations when traditional rules would not be granular enough.
This learned automation evolves with the network and its operators, reducing the maintenance burden.
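A toy version of this observation loop might simply mine the operator action log for trigger/action pairs that recur often enough to be worth automating. The field names and the repeat threshold are assumptions made for the sketch:

```python
from collections import Counter

def automation_candidates(action_log, min_repeats=3):
    """Scan the operator action log for (trigger, action) pairs seen
    often enough to propose as learned automations. The threshold and
    log fields are illustrative, not from a real product."""
    counts = Counter((e["trigger"], e["action"]) for e in action_log)
    return [pair for pair, n in counts.items() if n >= min_repeats]

# Four identical manual responses to the same alarm, plus a one-off action
log = [{"trigger": "high_temp_alarm", "action": "increase_fan_speed"}] * 4 \
    + [{"trigger": "bgp_flap", "action": "bounce_session"}]
print(automation_candidates(log))
# → [('high_temp_alarm', 'increase_fan_speed')]
```

A real system would generalise beyond exact matches (learning thresholds and parameter patterns too), but the principle holds: the playbook is mined from behaviour rather than written in advance.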
.
13. New Design Patterns for Relationship Building
Greg argues that “designing agentic experiences requires thinking about user interface design completely differently.”
One of the things that I’ve found interesting from working with data science is the move from binary (black or white) decisions to making decisions even when the options are grey or not fully known.
We have the potential to introduce confidence bars into our OSS UIs to visualise shades of grey. This may incorporate examples such as recommendation certainty, breadcrumbs and/or explanations to trace reasoning, confidence levels in data integrity and lightweight correction mechanisms so operators can easily amend agent assumptions (eg reinforcement learning, such as learning which knowledge entries give the best fault-fix outcomes in different failure scenarios).
These relationship-centric UI patterns foster transparency, control and confidence in the algorithms even when dealing with “shades of grey” decision-making.
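Even something as simple as a text-mode confidence bar conveys the “shades of grey” idea: one hypothetical rendering, sketched below, could sit alongside each agent recommendation in the UI:

```python
def confidence_bar(score, width=10):
    """Render a recommendation's certainty (0.0 - 1.0) as a simple text
    bar, the kind of greyscale indicator an OSS UI could attach to each
    suggestion. Purely an illustrative widget."""
    filled = round(score * width)
    return f'[{"#" * filled}{"." * (width - filled)}] {score:.0%}'

print(confidence_bar(0.72))  # → [#######...] 72%
```

A production UI would use richer visuals, but the design intent is identical: make the algorithm’s uncertainty visible at a glance so the operator knows when to double-check.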
.
14. Netflix-Style “Surprise” Recommendations
Greg praises Netflix for “learning your taste well enough to surprise you with things you didn’t know you wanted.”
In OSS, surprise is the enemy of reliability. Operators require precise, context-driven insights, not serendipitous suggestions, so this principle would initially seem not to apply. But viewed through the lens of the second video above, where only the most important insights are surfaced for human operators, “surprise” could be worthwhile.
Let’s say, for example, a NOC operator is diligently working on clearing away events that the AIOps engine hasn’t been able to resolve automatically. But then, a seemingly innocuous event occurs that has the potential to turn into a runaway Sev1 outage. This “surprise notification” mechanism could be really helpful as a rapid-escalation measure.
.
15. Mood-Based Personalisation (Spotify DJ)
“Spotify becomes your personal DJ who gets better at reading your mood,” Greg notes. While compelling for leisure, I can’t see mood-based personalisation being useful in OSS, where automations are governed by repeatable, deterministic outcomes. What do you think? Am I missing a possible scenario (or scenarios) here, where mood-based personalisation could be incorporated into OSS UI/UX?
.
16. Social-Driven Engagement Metrics (Likes, Shares)
Greg contrasts clicks with trust, but we could also consider whether social / sentiment metrics like “likes” or “shares” could be leveraged by agentic OSS.
OSS platforms typically lack a concept of community-driven engagement. Success is measured by operational outcomes, not by social interactions. There’s no value in “liking” an alarm or a device. However, social sentiments could help with scenarios like data integrity confidence, as in the example we shared earlier.
.
17. Consumer Community Network Effects
Greg discusses a new network effect based on context accumulation, but consumer community effects rely on many users influencing one another’s experience. Traditional OSS have been more like a solo sport, where outcomes are driven by individual operator-agent actions, not by peer or collaboration networks.
However, perhaps there is the potential for the “power of the crowd” to influence decisions. One example is using reinforcement learning mechanisms to identify the best choice of knowledge entries for resolving a problem.
Another example is making network operations more of a team sport, facilitating collaboration between teams via clever OSS UIs and agentic approaches that coordinate interactions. This becomes especially important when complex, multi-domain, black-swan events aren’t handled by assurance algorithms. We talked about an example of “collaboration rooms” built into the UI many years ago, but they don’t seem to have taken off yet. Perhaps agentic tools can facilitate a re-think?
.
18. AI “Companionship” for Leisure Tasks
While Greg envisions AI as a teammate, he also acknowledges its role in leisure contexts. In the world of OSS, framing the agent as a “companion” risks trivialising mission-critical operations. The focus must remain on rigour, safety and compliance, not on friendly banter. However, it may have the potential to introduce the concept of an “expert advisor,” or mentor, especially as a “canary on the shoulder” of new starters.
.
Recommendations and Next Steps
- Start with an OSS Re-Framing Exercise: We use this technique on many projects, especially ones where we’re helping clients with major transformations, Go To Market or Product re-inventions
- Hire UI/UX (and AX) Designers, even if just on assignment: It always shocks me that I know thousands of tech experts from the world of OSS, but can count on the fingers of one hand the number who are UI/UX experts. Due to the complexity of what our OSS tools are managing and coordinating, we need better designs than almost any other industry I can think of
- Simplify: We must bring down the “Intuition Age” of our solutions
- Pilot 1 – Begin a Context Aggregation Pilot: See what you can do with today’s agentic tools and your data, using the videos above as inspiration. Integrate alarms, tickets, inventory/assets and configuration changes into unified incident timelines and see what sort of interactive UI / UX / AX you can develop
- Pilot 2 – Deploy a Recommendation Engine: Take the pilot above and build an “almost” closed-loop system around it. Offer actionable remediation suggestions with confidence scores and inline explanations (tapping into points 8, 9 and 10 above)
- Pilot 3 – Establish Feedback Loops: Allow operators to confirm or correct suggestions, feeding that data back into the learning model
- Pilot 4 – Iterative Feedback: Allow operators to iteratively “chat-change” your lab network using guided recommendations like in video 3 above
- Pilot to Production – Automate Low-Risk Tasks First: Branch out from the pilot and start running the Pilot System in parallel with current production systems. Start with suppressing false positives and routine threshold adjustments to build trust
- Track Trust Metrics: Measure autonomy ratio, recommendation acceptance rate and operator satisfaction to quantify AI partnership (the trust gradient in point #10 above)
- Build a Network Operations Ninja Academy (NONA) Model: Start to use agentic models to help identify a way to help today’s “apprenticeships” become tomorrow’s “ninjas” in an AI-evolved world
What else am I missing? Probably too many to think of! I’d love to hear your thoughts and what you’re already playing with. Leave us a note in the comments section below!
I’m excited to see where these AX principles take us. If nothing else, I think that these new tools allow us to entirely re-imagine the GUIs and MUIs of our future OSS.