What If Our Devices Became Our Co-Pilots?
Driving up Highway One, I realized that our devices might stop listening and start acting. Soon.
I was driving up Highway One last weekend. Top down, sun on my shoulders, engine echoing off the cliffs. The ocean was shimmering just out of reach. Then, somewhere in the middle of this perfect rhythm, my Apple Watch buzzed. No cell service. No CarPlay map loaded. Just a gentle vibration telling me the cabin had gotten too loud, as if my watch were politely asking me to keep it down.
I laughed out loud. In that instant, I realized just how thoroughly I was surrounded by invisible companions. My watch was quietly monitoring my heart rate. My iPhone was collecting tiny location hints, even when I thought I was off the grid. CarPlay was humming in the dashboard, probably capturing more than I’d care to imagine. The car itself was a rolling laboratory, logging cabin acoustics, acceleration patterns, and maybe even micro-adjustments in my posture.
The iPhone began as a phone, a music player, and a pocket internet communicator, just as Steve Jobs framed it on stage in 2007. But today, almost without us noticing, it has transformed into a spatial scanner, a biometric sentinel, a silent early-warning system. Somewhere between two curves, it struck me: Apple may be more prepared for what comes next than any of us realize. The watch on my wrist, the phone in my pocket, CarPlay under my fingertips: each a subtle thread, weaving an intimate tapestry of my daily life. Few other companies have managed to slip so gracefully into our personal spaces.
And when you look at the bigger picture, it’s clear we’re approaching what might be the most significant technological inflection point since the internet revolution: the convergence of AI, advanced sensors, and biotechnology. Apple sits in a uniquely advantageous position within this convergence. They’ve spent years mastering sensors and personal data, building quiet trust, and now they’re integrating AI deeper into the fabric of our daily experiences.
With this, we’re seeing the first hints of what some are calling Large Action Models (LAMs): early prototypes of systems that don’t just understand what we say but might anticipate what we need before we even ask. These models move from language to action, their triggers shifting from explicit requests to ambient signals, and their responses from suggestions to interventions.
Imagine a Personal LAM that senses when you're nearing burnout. It doesn’t just check your calendar or count your steps. It listens to the subtle strain in your voice, notices your restless sleep patterns, and recognizes the slight hesitation before you open another email. It learns the difference between your busy days and your overwhelmed days. It doesn’t wait for you to cancel meetings or block time; it does it for you. It might quietly reschedule a tense investor call, line up a sunny cabin retreat, or even plan a drive up the coast, just like mine that day, complete with your favorite playlist and a quiet overlook waiting for you at sunset. It becomes a gentle force that helps you live not just more productively, but more as a human.
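To make that shift from suggestion to intervention concrete, here is a minimal sketch of the sense-infer-act loop such a personal LAM might run, written in Swift to match the ecosystem above. Everything in it is a hypothetical illustration: the signal names, weights, and thresholds are placeholders, not Apple APIs.

```swift
import Foundation

// A hypothetical sense -> infer -> act loop for a personal LAM.
// Signal names, weights, and thresholds are illustrative placeholders.

struct DailySignals {
    let restingHeartRateDelta: Double  // bpm above your personal baseline
    let sleepQuality: Double          // 0 (restless) ... 1 (restorative)
    let calendarDensity: Double       // fraction of working hours booked
    let voiceStrain: Double           // 0 (relaxed) ... 1 (strained)
}

enum Intervention {
    case suggestBreak                   // today's assistant: a notification
    case rescheduleNonCriticalMeetings  // a LAM: acts on your behalf
    case planRecoveryDay
}

/// Fold weak signals into a single burnout estimate in 0...1.
/// A real system would learn these weights per person.
func burnoutScore(_ s: DailySignals) -> Double {
    let raw = 0.3 * min(s.restingHeartRateDelta / 10.0, 1.0)
            + 0.3 * (1.0 - s.sleepQuality)
            + 0.2 * s.calendarDensity
            + 0.2 * s.voiceStrain
    return min(max(raw, 0.0), 1.0)
}

/// The shift from suggestion to intervention: the same score,
/// but past a threshold the system acts instead of asking.
func decide(_ s: DailySignals) -> Intervention {
    switch burnoutScore(s) {
    case ..<0.5:  return .suggestBreak
    case ..<0.75: return .rescheduleNonCriticalMeetings
    default:      return .planRecoveryDay
    }
}

let today = DailySignals(restingHeartRateDelta: 8, sleepQuality: 0.3,
                         calendarDensity: 0.9, voiceStrain: 0.6)
print(decide(today)) // planRecoveryDay
```

The interesting part is the threshold: below it, the system suggests; above it, it acts.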
Now imagine a Corporate LAM that reads the pulse of an organization the way a skilled doctor listens to a heartbeat. It feels the drag in decision cycles, spots tiny fractures in cross-functional trust, and senses when a high-performing team is slipping into quiet resignation. It might propose a sudden offsite to spark new energy, suggest a cross-pollination project between unlikely teams, or automatically surface long-forgotten customer insights at just the right time.
This is not just theoretical. Apple’s partnership with OpenAI, and its reported talks with Anthropic, hint at a future where Siri isn’t merely a voice assistant but an actual intelligent agent: a layer that might one day weave together our health data, our environment, and our patterns, and quietly step in to help before we even realize we need it.
Not to mention, just last week, OpenAI announced the first wave of autonomous ChatGPT Agents that can take actions on your behalf. These systems can plan, execute multi-step tasks, and interact with your files, calendar, and apps without you lifting a finger. This technology is inching closer to becoming the connective tissue that ties everything we do together.
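The same pattern sits underneath these agents: a model turns a goal into steps, and each step runs against a tool with real side effects. Here is a minimal, hypothetical plan-then-execute loop, again in Swift; the Tool protocol and the hard-coded plan stand in for what a real agent would derive from a model, and none of this is the actual ChatGPT Agents API.

```swift
import Foundation

// A hypothetical plan-then-execute agent loop. In a real agent, a model
// would generate the plan from the goal; here it is hard-coded to show shape.

struct Step {
    let tool: String        // e.g. "calendar", "files"
    let instruction: String
}

protocol Tool {
    func run(_ instruction: String) -> String
}

struct CalendarTool: Tool {
    func run(_ instruction: String) -> String { "calendar: \(instruction) done" }
}

struct FilesTool: Tool {
    func run(_ instruction: String) -> String { "files: \(instruction) done" }
}

/// Stand-in planner: a real agent would ask a model to decompose the goal.
func plan(for goal: String) -> [Step] {
    [Step(tool: "files", instruction: "gather notes for \(goal)"),
     Step(tool: "calendar", instruction: "block two hours to review \(goal)")]
}

/// Execute each step against its tool, observing results as we go.
func execute(goal: String, tools: [String: Tool]) {
    for step in plan(for: goal) {
        guard let tool = tools[step.tool] else { continue }
        print(tool.run(step.instruction))
    }
}

execute(goal: "the quarterly review",
        tools: ["calendar": CalendarTool(), "files": FilesTool()])
```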
That afternoon, as the sun tilted west and the ocean glowed, I let questions about the future drift ahead without me, like a car disappearing around the next bend. I pulled into a small overlook, engine ticking softly, salt in the air, and the world momentarily hushed.
I thought about how far we’ve come — the quiet genius embedded in every sensor and the layers of intelligence humming beneath our fingertips. And I thought about my own journey through it all: learning the foundations of software back in university, then jumping headfirst into startups where I felt the electricity of building things that worked and mattered.
I watched us move from mainframes to PCs, from client-server to cloud and microservices, from on-prem to SaaS and mobile — each wave teaching us new ways to deliver real value and real results. And now, here we are, standing at the edge of AI, sensors, and even biology converging into something we’re only beginning to imagine. That afternoon, I thought about how much further we might go.
Maybe that’s the startup in me — forever caught between what is and what could be. But sometimes, it’s enough to pause, to feel the sun on your face and the road beneath your wheels, and to remember that the future can wait just a little while longer.
Sometimes the best question isn’t “what if?” but “why not just be here, right now?”