Smart interfaces and AI agents are the next frontier in digital evolution. So what happens when you no longer need a screen to use your tech, asks Paul Armstrong
We swipe. We tap. We double-click and long-press. For over 20 years, the smartphone has been the interface through which we interact with everything – our friends, our work, our news, our sense of self. But cracks are starting to show. Screens are saturated. Apps are bloated. Notifications are noise. The smartphone isn’t going away overnight, but its cultural and technological dominance is already being challenged, and that’s going to mean interesting shifts for every business under the sun.
The shift won’t be to one device. It’ll be to a system: a network of interfaces, sensors and AI agents embedded across our environments, our bodies – and, sooner than we’d all probably like, our thoughts. The future is unlikely to bring one gadget that replaces the phone. We’ve had wearables for years, and while many are popular, none is as ubiquitous as the smartphone. The real shift is how digital life will evolve once we stop needing to hold it in our hands.
We’re moving toward an interface-light world. Smart assistants are no longer just in your phone – they’re in your car, your kitchen, your workplace. Add to that the rise of multimodal AI: models that can see, speak, listen, remember and act across platforms. These systems are increasingly agentic – able to make decisions, complete tasks and anticipate needs without human prompting. We’re in the early days, but already you can see the disruptive elements swirling.
Augmented reality (AR) glasses are inching closer to mainstream viability. Meta’s Ray-Ban smart glasses, Apple’s Vision Pro and Google’s persistent AR projects are all early attempts at collapsing the screen into your surroundings. No more checking a map on your phone – your route is hovering in front of your face. Need to understand a foreign language? Subtitles appear in real time as someone speaks. Visual overlays, contextual cues and interactive holograms are now part of the design language. We’re getting there, but none has gone mass market, mainly because of design – expect that to change now that Meta has a stranglehold on Luxottica. Meta’s forthcoming ‘Orion’ glasses are likely years away, but they’re a good example of how we’ll experience a lot in the future. Sounds Black Mirror-esque? You’re not wrong: people have already demonstrated dangerous privacy issues. Perhaps we’ll even see contact lenses – after all, prototypes that continuously monitor blood sugar have already been trialled.
Hands are slowly being phased out as the primary mode of interaction. The Apple Watch already lets you control actions with a double finger-pinch gesture – no taps, no buttons. AirPods allow users to nod to accept a call or shake their head to decline it. Some apps let you skip Spotify tracks by waving a hand over your phone. These aren’t gimmicks. They’re signals that interaction is decoupling from the device entirely. Of course, accessibility issues still exist, but when the tech is perfected years down the line, we’ll simply think about things and they will happen.
Can we trust big tech?
Holographic projection wearables like you see in sci-fi? Still sci-fi, but we’re getting closer. That hasn’t stopped the likes of Samsung from trying, with wrist-worn bracelet displays that project touch interfaces directly onto your arm or any surface. These remain difficult for a number of reasons, from physics to battery life to lighting. Neural interfaces are accelerating in parallel. Elon Musk’s Neuralink has already received FDA approval for human trials, aiming for direct brain-computer links – or, as Musk calls it, ‘telepathy’. Other players like Synchron and NextMind (now owned by Snap) are racing to build non-invasive brain input devices that translate intention into digital action – no tapping required. Thank god we can trust all these big tech folks…
Startups are already building headbands that detect neural signals and allow users to control cursors or issue commands with thought alone. Current applications focus on accessibility and medical use, but the direction is clear. Once the input lag between thought and action vanishes, screens stop being necessary. Navigation, communication, and interaction collapse into intention. Again, thank god we can trust these big tech giants with our simple data…
The post-smartphone world won’t rely on opening apps. The app ecosystem, with its locked-in silos and gatekeeping fees, starts to unravel in a world powered by voice, gesture and thought. Instead of launching an app to order a taxi, you say – or think – what you need, and your AI agent handles it. No interface, no friction. Just intent, executed. Oh, and on subscription. Or an agentic AI will order one for you. The idea is that it’s all seamless and organised, nothing goes wrong and everyone respects your privacy. Did anyone else just see that flying pig?
Utopia or dystopia
While this utopia or dystopia, depending on how you see it, will take years to come to any semblance of fruition, the seeds are clearly being sown and entire industries are vulnerable. App design may go the way of web banners – replaced by interaction models that are ephemeral, contextual and fluid. What replaces the homepage? A query. What replaces the button? A glance. Brand visibility won’t rely on icons; it’ll rely on integration into invisible systems. Services will compete not for downloads, but for presence within the AI ecosystems that act on your behalf. How brands compete in this new world is going to be interesting.
Control of the post-smartphone landscape won’t go to whoever builds the best-looking device – it’ll go to whoever owns the decision layer. Apple is building a closed spatial computing stack. Meta wants its AR layers to host your identity, commerce and entertainment. Amazon is embedding Alexa deeper into ambient environments. Google is packaging its generative models into everything from search to wearables. Whoever shapes the interface for your intent ends up shaping the outcome, and there will be many masters.
Delaying adaptation isn’t really an option. Businesses should be preparing now for what interaction looks like without a screen. Optimise for voice, gesture, spatial computing and agentic AI – not in five years, but now. Start by ensuring your services are machine-readable, accessible by AI agents (or protected against them), and interoperable with emerging platforms (or not). Think about what your product looks like when it’s part of an experience, not an app. Rethink how users engage when the trigger is no longer a tap, but a whisper or a blink. Your strategists and UX folks are about to earn their money, so keep them close.
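What “machine-readable and accessible by AI agents” might look like in practice: one common starting point is publishing structured data (schema.org JSON-LD) alongside your pages, so crawlers and agents can understand a service without parsing your UI. The sketch below is a minimal, hypothetical example – the business details, URLs and field choices are invented for illustration, not a prescription.

```python
import json

def service_jsonld(name, description, url, area_served):
    """Build a minimal schema.org Service description as JSON-LD,
    including a potentialAction that signals a bookable endpoint
    to any agent reading the page."""
    return {
        "@context": "https://schema.org",
        "@type": "Service",
        "name": name,
        "description": description,
        "url": url,
        "areaServed": area_served,
        "potentialAction": {
            "@type": "ReserveAction",
            "target": url + "/book",  # hypothetical booking endpoint
        },
    }

payload = service_jsonld(
    "Example Taxi Co",             # invented name
    "On-demand taxi booking",
    "https://example.com/taxi",    # placeholder URL
    "London",
)

# Embed the output in a page inside
# <script type="application/ld+json"> … </script>
print(json.dumps(payload, indent=2))
```

The same thinking extends beyond markup: clean APIs, consistent identifiers and explicit action endpoints are what an agent will reach for when there’s no app icon to tap.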
The age of screens defined how we communicated, worked and consumed. That phase is ending. Wearables, holographic displays, neural interfaces and ambient AI are dissolving the edges of interaction, shifting the power from physical devices to digital context. What comes after the touchscreen isn’t another rectangle – it’s the end of the rectangle entirely. And for businesses still thinking in pixels and app stores, it’s time to start thinking much bigger.
Paul Armstrong is founder of TBD Group, runs TBD+ and is the author of Disruptive