Continuous Technology Assessment Is Coming
JPMorgan's CISO lays down new rules for constant security checks that presage continuous technology assessment rather than point-in-time due diligence or audits.
The world’s fifth largest bank just changed the rules for technology management and governance. Boards and company leadership should heed this mandate.
On the eve of the annual RSAC security conference, one of the world’s most important CISOs fired a shot across the bow of technology providers everywhere. In an open letter to third-party technology providers, Patrick Opet, the Chief Information Security Officer of JPMorgan Chase, called for greater prioritization of security over feature velocity.
More importantly, he underscored that any vendor dealing with JPMorgan Chase would have to continuously assess the security of its product and its technology supply chain. JPMorgan would also expect vendors to provide continuous proof that their systems are safe and appropriately governed.
The real impact of this letter is far broader than it might initially appear. This is not just about the software industry. It is about every industry. Every company now runs on technology, often relying heavily on external SaaS platforms and tightly integrated APIs that are deeply embedded in their operations.
JPMorgan’s message will reshape how technology is evaluated and governed. The old cadence of annual reviews and point-in-time procurement no longer matches the speed of modern digital operations. AI is accelerating every layer of the technology stack, from development and integration to deployment and optimization. As a result, companies must shift to continuous assessment.
Security may have triggered this shift because it is the most visible source of existential risk. But security is not the only reason for adopting continuous technology assessment. Businesses that continually evaluate the systems they use have a far more accurate picture of what is actually happening. That visibility translates directly into competitive advantage. It enables organizations to leverage technology more effectively to accelerate growth, improve products, boost margins, and reduce costs. In the process, they also reduce exposure to the kinds of silent, system-level failures that tend to become crises.
In our work with companies large and small, and through our involvement in dozens of acquisition evaluation teams at Amazon, Apple, and Microsoft, Techquity partners have witnessed first-hand the wisdom of Opet’s push for continuous technology assessment. Companies that implement this practice are not just avoiding security incidents—they are improving operational resilience, product execution, and financial efficiency at the same time. This triple benefit is driving a structural change in how leading organizations prioritize, measure, and manage technology across the enterprise.
This shift is reaching the highest levels of corporate governance. Boards of Directors are increasingly expected to oversee not just cybersecurity as part of compliance, but the full scope of technology risk and investment. That includes performance, cost-effectiveness, architectural soundness, and resilience. Many boards are actively seeking external expertise to upgrade their technology governance capabilities. They recognize that a reactive, compliance-based model is no longer sufficient. Continuous oversight demands continuous active assessment along with systems and approaches that generate actionable intelligence. For boards, investors and CEOs, technology assessment will become their most important lever for improving not only security but also company performance.
Technology Oversight Must Become a Core Board-Level Competency
Companies that are great at technology outperform. This has been true for decades but is becoming ever more important as technology becomes ever more critical to operations, growth and profits across all industries. Just as board members are expected to understand balance sheets and regulatory frameworks, they must now also understand the basics of technology architecture, software supply chains, and AI deployment risks. As such, technology oversight is no longer the domain of the CIO or CTO alone. It now needs to sit beside finance, HR, and corporate governance as a core board and executive concern. JPMorgan's letter is both a wake-up call and a blueprint for what comes next. Their vendor demands can force broad changes that flow both upstream and downstream, not only to vendors but to customers and the broader community.
JPMorgan's demand for continuous security translates directly into a demand for continuous technology oversight and governance. Companies that hone the practice of continuous security assessment will also be building the capability for continuous technology assessment, provided they design their oversight programs to focus not only on metrics but also on process and people. Continuous assessment of technology products, people, processes, and vendors provides the scaffolding to support this shift. It enables senior leaders to see clearly, act quickly, and allocate capital intelligently, or course correct ahead of catastrophe, in a digital-first environment.
We have personally witnessed how Boards of Directors that embrace this shift gain better visibility into both upside and downside exposure. Rather than relying on static dashboards or audit-driven views, they begin to build dynamic, forward-looking frameworks that allow them to steer technology investments with the same rigor and granularity that they apply to M&A or strategic finance. They can see around corners and can turn technology into an unfair advantage to be leveraged for growth and profits. That includes both internal technology infrastructure and operations, and customer-facing technology capabilities. There is no separation of the two in modern enterprise.
Security Is Just the Starting Point
JPMorgan’s letter highlights a new class of systemic risk created by the SaaS-first, API-connected world. Tightly coupled integrations, shallow authorization models, and over-permissive tokens are dissolving the security boundaries that once separated core systems from outside services. A vulnerability in a single vendor now has the potential to ripple through thousands of interconnected organizations.
This is not a theoretical problem. JPMorgan has already faced multiple incidents involving compromised third-party providers. These required emergency responses, resource-intensive containment, and in some cases, the isolation of vendors from mission-critical environments. If the most fortified financial institution in the world is being forced to take these steps, others are surely even more vulnerable.
Still, the need for continuous assessment extends far beyond security. The very same forces that introduce risk—opaque dependencies, constant iteration, and AI-generated code—also introduce inefficiency, drift, and cost bloat.
AI Demands Real-Time Visibility
AI-driven tooling is transforming how code is written, infrastructure is deployed, and business processes are automated. Developers can now assemble fully functional applications or integrations in a matter of minutes. But speed comes at a price. AI agents often skip vital architectural decisions, gloss over edge cases, and introduce risky dependencies without adequate review.
Worse, AI systems are still prone to hallucination. They may reference nonexistent libraries, pull in unverified code, or introduce open-source packages that have not been vetted or that do not even exist. Cybercriminals have already noticed and are exploiting these blind spots. As AI accelerates build and deployment cycles, it creates intense pressure on governance. Validation, risk management, and optimization must all move at the same pace.
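One way to catch hallucinated or unvetted dependencies is to gate builds on a vetted allowlist, so that a package no one has reviewed (or that does not exist at all) is flagged before anything is installed. This is a minimal sketch; the allowlist and requirements below are hypothetical examples, not any organization's real policy:

```python
# Illustrative sketch: flag declared dependencies that are not on a
# vetted allowlist. Both the allowlist and the sample requirements
# are hypothetical placeholders.

VETTED_PACKAGES = {"requests", "numpy", "flask"}

def audit_dependencies(requirements: list[str]) -> list[str]:
    """Return declared package names that are not on the vetted allowlist.

    AI-generated code can reference packages that were never reviewed,
    or that do not exist at all; gating installs on an allowlist
    surfaces both cases before anything is pulled into the build.
    """
    flagged = []
    for line in requirements:
        # Strip version pins like "requests==2.31.0" down to the bare name.
        name = line.split("==")[0].split(">=")[0].strip().lower()
        if name and name not in VETTED_PACKAGES:
            flagged.append(name)
    return flagged

requirements = ["requests==2.31.0", "numpy", "totally-made-up-pkg==0.1"]
print(audit_dependencies(requirements))  # ['totally-made-up-pkg']
```

In practice this kind of check would run in CI on every change, which is exactly the "same pace" governance the passage above calls for.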
This is where continuous assessment becomes essential. Enterprises must treat their technology stack as a living system. Every service, component, and integration must be evaluated not just at onboarding but continuously, based on real-world usage, performance, cost, and exposure. Technology teams must be measured and supported based on how effectively they solve problems and sustain critical systems. Without this visibility, technology decisions become speculative, performance degrades, and risk accumulates silently.
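As a rough illustration of what treating the stack as a living system might look like, the sketch below scores a single service on live cost, performance, and exposure signals rather than a point-in-time audit. The service name, signals, and thresholds are all hypothetical placeholders, not a prescribed framework:

```python
# Illustrative sketch of continuous assessment: score each service on
# live signals rather than an onboarding-time review. All names,
# signals, and thresholds here are hypothetical.
from dataclasses import dataclass

@dataclass
class ServiceSnapshot:
    name: str
    monthly_cost_usd: float
    p95_latency_ms: float
    open_critical_vulns: int
    utilization_pct: float  # share of provisioned capacity actually in use

def assess(s: ServiceSnapshot) -> list[str]:
    """Return findings that should trigger review, not a pass/fail audit."""
    findings = []
    if s.open_critical_vulns > 0:
        findings.append("security: unresolved critical vulnerabilities")
    if s.p95_latency_ms > 500:
        findings.append("performance: p95 latency above target")
    if s.utilization_pct < 20:
        findings.append("cost: underutilized, candidate for retirement")
    return findings

snap = ServiceSnapshot("billing-api", 12_000, 620, 1, 15)
for finding in assess(snap):
    print(finding)
```

Run continuously across every service and integration, even a simple rubric like this replaces speculation with a stream of actionable findings.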
Cost, Performance, and Team Capability Must Be Measured
Many organizations only realize too late that a vendor, platform, internal tool, process, or team has become a bottleneck. Costs escalate. Reliability declines. Dependencies deepen. Migration becomes politically difficult or operationally dangerous. Likewise, management and boards can learn too late that specific teams or leaders are bottlenecks, and that a lack of healthy processes and proper feedback loops can hamper even cutting-edge technology programs.
Continuous assessment of technology projects, teams and products flips this dynamic. It creates the visibility necessary to monitor real outcomes, identify waste, and reallocate resources. Underutilized services can be retired. Workloads can be shifted to more performant or cost-effective alternatives. Contracts can be renegotiated with data to back up every conversation. Team dynamics and process bottlenecks can be spotted and corrected, and the health of major initiatives checked continuously.
More importantly, continuous assessment improves agility. In today’s economy, the ability to pivot quickly is just as important as the ability to scale. Markets shift. Regulations change. Competitors move fast. Technology itself is rapidly changing; witness the shifting sands of AI and how much has changed in the past year alone. Enterprises that maintain continuous intelligence and evaluation on their technology teams and initiatives will be able to adapt faster and with more confidence.
Equally important, adaptability and agility are not about speed alone. They are about continuous alignment between business objectives and the technologies that support them. That alignment cannot be maintained through quarterly meetings or static dashboards. It requires frequent oversight, detailed knowledge of technology, and adaptive governance at every level, from engineers to the boardroom. Enterprises that succeed in technology build up pattern recognition of what works and what doesn't, and that pattern recognition must flow into board oversight.
From Compliance to Continuous Technology Intelligence
Compliance was designed for a slower era. Most frameworks depend on static certifications, vendor surveys, and annual audits. These artifacts are necessary but insufficient. They do not reflect the current state of a system, much less its performance under real-world conditions.
Leading organizations are already building a new model. They are implementing service-level observability, adopting policy-as-code, and mapping supply chains dynamically. They are using AI not just to build systems, but to validate them. These capabilities are not optional upgrades. They are the foundational requirements of technology leadership in the AI age.
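Policy-as-code, one of the capabilities named above, turns governance rules into data that can be evaluated automatically on every change rather than in an annual audit. Dedicated tools exist for this; the sketch below only illustrates the idea, with hypothetical policies and a hypothetical vendor configuration:

```python
# Illustrative policy-as-code sketch: policies are expressed as data and
# evaluated on every configuration change, not once a year. The policies
# and the sample configuration are hypothetical.

POLICIES = [
    ("tls_required",      lambda cfg: cfg.get("tls") is True),
    ("no_wildcard_scope", lambda cfg: "*" not in cfg.get("token_scopes", [])),
    ("logging_enabled",   lambda cfg: cfg.get("audit_logging") is True),
]

def evaluate(cfg: dict) -> list[str]:
    """Return the names of policies the configuration violates."""
    return [name for name, check in POLICIES if not check(cfg)]

vendor_cfg = {"tls": True, "token_scopes": ["read", "*"], "audit_logging": False}
print(evaluate(vendor_cfg))  # ['no_wildcard_scope', 'logging_enabled']
```

Because the rules are machine-checkable, the same evaluation can run in CI, at deploy time, and on a schedule, which is what makes the assessment continuous rather than point-in-time.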
The JPMorgan letter draws a clear line. Enterprise buyers should no longer be willing to accept opacity, fragility, or unverified claims from their technology providers or their internal tech teams. They are demanding accountability and demonstrable assurance. Boards are responding by elevating technology oversight to the same level as financial risk, legal exposure, and reputational management.
Continuous technology assessment is the logical next step. It creates a single source of truth for executives, engineers, and directors alike. It reduces risk, improves performance, and controls cost. Most importantly, it aligns everyone around a shared, real-time view of how technology is actually performing.
Security may have started this conversation. Business value is what will carry it forward. Boards that embrace continuous technology assessment will reap the rewards and create an unfair technology advantage. Boards that ignore it will be left behind, and may lose business from customers like JPMorgan Chase that insist not only on better security but on better products and the better operational capabilities that only great technology can drive.
Anthony Bay is CEO and co-founder of Techquity (www.techquity.ai). He has previously held senior roles at Amazon, Apple, and Microsoft as well as at early-stage companies, served on multiple private and public boards, and launched the world's first social network before the Internet even existed. He has led product groups and M&A, and has held CEO and board governance roles at everything from early-stage startups to large corporations.
Alex Salkever is a partner at Techquity and a former BusinessWeek technology editor, as well as an advisor to startups and large companies on the impacts of technology change and artificial intelligence. He is the author of four award-winning business books, including “Driver in the Driverless Car” and “Your Happiness Was Hacked”.

The letter from Opet focused on security, but its implications extend to all technology, particularly open source. My colleague Bill Barton highlighted another point:
This passage from Opet's open letter seems key: "In practice, these integration models collapse authentication (verifying identity) and authorization (granting permissions) into overly simplified interactions, effectively creating single-factor explicit trust between systems on the internet and private internal resources. This architectural regression undermines fundamental security principles that have proven durability." The fact that many people (probably including many of us) tend to abbreviate both authentication and authorization as 'auth' is a signal of this unfortunate conflation.