MOONSHOT: A Blueprint for Fixing Digital Trust

Toward a Philosophy of Digital Identity: Rethinking Trust, Agency, and Collective Security in the Information Age

The Philosophical Foundations of Our Digital Predicament

We stand at a peculiar moment in human history, one that would have bewildered philosophers from Aristotle to Arendt: billions of people now conduct the most intimate aspects of their lives through systems they cannot see, understand, or meaningfully control. Our capacity to love, work, learn, vote, create, and connect has become inextricably entangled with digital infrastructures that operate according to principles most of us never consented to and logics we never examined. This is not merely a technological challenge; it is a profound philosophical crisis about the nature of human agency, trust, and collective flourishing in an age of ubiquitous computation.

When we speak of "fixing digital trust," we are really asking a much deeper question: How do we preserve human dignity and democratic possibility in a world where our most fundamental social interactions are mediated by systems designed around presumptions of surveillance, extraction, and control? The current credential crisis (the endless cycle of passwords, breaches, identity theft, and technological band-aids) is not just an engineering problem. It is a symptom of a deeper philosophical confusion about what it means to be human in digital space.

To understand why credential reform represents a pathway to broader social transformation, we must first examine the philosophical assumptions embedded in our current systems and imagine alternatives grounded in different principles entirely. This is not abstract theorizing; it is urgent practical philosophy, because the design choices we make about digital identity today will determine whether technology serves human flourishing or undermines it for generations to come.

Identity as Performance vs. Identity as Artifact: A Philosophical Divide

The heart of our current digital crisis lies in a fundamental philosophical error: we have treated identity as a thing rather than as a practice. This mistake reverberates through every aspect of our technological landscape, from the way we design authentication systems to how we think about privacy, consent, and human agency itself.

In the dominant paradigm of digital identity, you are what you possess: passwords, tokens, biometric templates, and credential artifacts that can be copied, stolen, transmitted, and replayed. This approach reflects what we might call an objectivist theory of identity, the notion that selfhood consists in the accumulation and control of discrete, measurable properties. Your identity becomes a collection of secrets you must guard, attributes you must verify, and tokens you must manage across an ever-expanding array of platforms and services.

But consider an alternative philosophical framework, one rooted in what we might call a performative theory of identity. In this view, identity is not something you have but something you do, not a static possession but a dynamic capability that emerges through conscious engagement with the world. When you recognize a friend's voice in a crowded room, when you complete a familiar pattern of movement, when you respond to a situation in a characteristically "you" way, you are performing identity rather than merely asserting it.

This distinction carries profound implications for how we design technological systems. An objectivist approach to digital identity leads inevitably to surveillance capitalism: systems that must collect, store, and analyze vast amounts of personal data to verify that you are who you claim to be. A performative approach, by contrast, opens possibilities for what we might call dignity-preserving authentication: systems that allow you to demonstrate your identity through live capability without exposing the underlying patterns to capture, analysis, or replay.

The technical proposal at the heart of this document (replacing reusable credentials with time-bound, non-exportable proofs of capability) represents more than an engineering improvement. It embodies a fundamentally different philosophy of what human identity is and how it should be expressed in digital space. Instead of asking "What secrets do you possess that prove you are you?" it asks "What capabilities can you demonstrate that only you can perform right now?" This shift from possession to performance opens new possibilities for preserving human dignity in technological systems.

To understand why credential reform matters beyond individual convenience or security, we must examine the broader social contract that governs digital life, and recognize how profoundly that contract has been corrupted by the structural failures of current identity systems.

Consider what philosophers call the problem of social coordination: how do large groups of people who don't know each other personally manage to cooperate, share resources, and build institutions together? Traditional answers have emphasized law, culture, reputation, and shared institutions. But in digital space, coordination increasingly depends on technological trust, our collective confidence that the systems mediating our interactions will behave in predictable, beneficial ways.

This technological trust operates at multiple levels. At the individual level, you must trust that when you log into your bank account, you are actually communicating with your bank rather than an impersonator. At the social level, we must collectively trust that digital platforms will not systematically manipulate election outcomes, that medical records will remain confidential, that financial systems will not be compromised by hostile actors. At the civilizational level, we must trust that the basic infrastructure of digital communication will remain stable and secure enough to support the complex institutions that modern life requires.

The current credential crisis undermines trust at all these levels simultaneously. When passwords can be stolen en masse, when two-factor authentication can be bypassed through social engineering, when biometric templates can be spoofed or leaked, the entire edifice of digital trust becomes fragile. Individual vulnerability becomes collective vulnerability, because systems designed around the presumption of credential security fail catastrophically when that presumption proves false.

But the deeper problem is not just technical fragility; it is the way current identity systems violate basic principles of democratic consent and human dignity. Most people have never meaningfully consented to the surveillance architectures necessary to maintain password-based security. We did not choose to live in a world where every online interaction requires us to surrender personal information to vast corporate databases, where our behavioral patterns are continuously monitored and analyzed, where our capacity to participate in digital life depends on submitting to technologies of control we cannot understand or resist.

This represents what political philosophers call a democratic deficit, a situation where the conditions of social life are determined by systems that operate outside meaningful democratic accountability. The credential crisis is thus not merely a technical problem but a crisis of democratic legitimacy, because it has forced us to accept forms of technological control that we never chose and cannot effectively resist.

Credential reform offers a pathway toward what we might call democratic technology, systems designed to preserve human agency and dignity rather than undermine them. By removing the structural necessity for mass surveillance, data collection, and behavioral monitoring from authentication systems, we can begin to imagine digital technologies that serve democratic values rather than subverting them.

The Paradox of Security: How Protection Became Surveillance

One of the most troubling aspects of our current digital predicament is the way legitimate security concerns have been used to justify increasingly invasive and authoritarian technological practices. This represents a profound philosophical confusion about the relationship between security and freedom, one that credential reform can help clarify and resolve.

The dominant narrative around digital security suggests an inevitable trade-off between safety and privacy, between protection and freedom. To be secure, we are told, we must accept surveillance. To prevent fraud, we must submit to continuous monitoring. To verify our identities, we must surrender our biometric patterns to corporate databases. This narrative frames security and liberty as fundamentally antagonistic forces, locked in eternal tension.

But this framing obscures a crucial philosophical insight: genuine security, the kind that preserves and enhances human flourishing, cannot be achieved through systems that systematically undermine human dignity and agency. When security measures become indistinguishable from surveillance measures, when protection requires submission to unaccountable technological control, the cure becomes worse than the disease.

Consider the psychological and social costs of our current approach to digital security: the constant anxiety about password management, the exhausting cognitive burden of navigating different authentication systems, and the learned helplessness that comes from depending on technologies we cannot understand or control. These represent genuine harms to human well-being. When security measures impose such costs on the people they ostensibly protect, we must question whether they are actually serving security or merely creating an illusion of protection while transferring power to technological systems and their controllers.

The philosophical challenge is to develop approaches to security that enhance rather than diminish human agency and dignity. This requires moving beyond the false binary of security versus privacy toward what we might call dignified security, approaches that protect people without treating them as threats, that enhance collective safety without undermining individual autonomy, that provide genuine protection without requiring surrender of fundamental rights.

The credential reform approach outlined in this document embodies principles of dignified security. By eliminating the structural necessity for continuous surveillance and data collection from authentication systems, it demonstrates that protection and privacy can be mutually reinforcing rather than fundamentally antagonistic. When people can prove their identities without exposing their personal patterns to capture and analysis, when sessions can be secured without creating permanent records of user behavior, when fraud can be prevented without building comprehensive surveillance infrastructures, in these cases, security serves human flourishing rather than constraining it.

Technology as Ideology: The Hidden Philosophy of Current Systems

To understand why credential reform represents more than a technical fix, we must examine the ideological assumptions embedded in current digital systems, assumptions that shape not only how technology works but how we think about ourselves, our relationships, and our society.

Every technological system embodies a philosophy, though that philosophy is often invisible to users and sometimes even to designers. The current paradigm of digital identity, built around passwords, behavioral tracking, and biometric surveillance, reflects a particular set of assumptions about human nature, social relationships, and the proper role of technology in human life.

At its core, the current system assumes that humans are fundamentally untrustworthy, that given the opportunity, people will lie, cheat, steal, and deceive. This assumption of universal human unreliability justifies increasingly invasive verification measures, from continuous behavioral monitoring to biometric surveillance to comprehensive data collection. The technology treats everyone as a potential threat who must be constantly verified, monitored, and controlled.

This technological assumption of distrust becomes a self-fulfilling prophecy, shaping how we relate to each other and to social institutions. When digital systems treat us as presumptive threats, we internalize that treatment and begin to relate to technology, and through technology, to each other, with suspicion and defensiveness. The social fabric of trust, which democratic societies require to function, becomes corroded by technologies that assume its absence.

Consider the contrast with a technological philosophy rooted in different assumptions about human nature and social possibility. What if we designed identity systems around the presumption that most people, most of the time, are trying to do the right thing? What if we built technologies that enhanced rather than monitored human capabilities? What if we created systems that treated privacy and autonomy as design requirements rather than obstacles to overcome?

This alternative philosophy, what we might call dignified technology, does not ignore genuine security threats or pretend that fraud and deception never occur. Instead, it recognizes that security measures should be proportionate to actual risks and that the cure should not be worse than the disease. It assumes that people are capable of learning, adapting, and taking responsibility for their own digital lives when given appropriate tools and information.

The credential reform approach embodies principles of dignified technology by demonstrating that effective security can be achieved without comprehensive surveillance, that identity can be verified without exposing personal patterns to analysis, that fraud can be prevented without treating everyone as a potential criminal. By changing the technological infrastructure of identity, we can begin to change the social and psychological infrastructure of digital life itself.

The Commons of Trust: Individual Security as Collective Responsibility

One of the most profound philosophical insights emerging from the credential crisis is the recognition that individual digital security is not actually individual; it is a collective good that requires collective action to protect and preserve. This insight challenges dominant narratives of personal responsibility and technological self-help, pointing toward a more sophisticated understanding of how technology, society, and individual well-being intersect.

In the current paradigm, digital security is framed as primarily a matter of individual responsibility. Users are told to choose strong passwords, enable two-factor authentication, be vigilant about phishing, and keep their devices updated. This framing suggests that security breaches result primarily from individual failures: users who were not careful enough, not educated enough, not diligent enough in following security best practices.

But this individualistic framing obscures the structural dimensions of digital insecurity. When credential databases are breached, when authentication systems are compromised, when social engineering attacks succeed, these failures typically result from systemic vulnerabilities rather than individual mistakes. Even the most security-conscious users remain vulnerable to attacks that target the underlying infrastructure rather than their personal behavior.

More fundamentally, the individualistic approach to digital security ignores the network effects that make everyone's security dependent on everyone else's security. When poorly secured systems can be used as stepping stones for attacks on better-secured systems, when compromised accounts can be used to target their contacts, when successful attacks on some users provide intelligence for attacks on others, in these contexts, individual security becomes indivisible from collective security.

This insight points toward what we might call a commons-based approach to digital security, one that recognizes digital trust as a shared resource that requires collective stewardship to maintain. Just as environmental protection requires coordinated action rather than merely individual behavior change, digital security requires systemic reform rather than merely personal vigilance.

The credential reform proposal embodies principles of commons-based security by addressing structural vulnerabilities rather than simply demanding better individual behavior. By eliminating the systemic weaknesses that make credential theft profitable and scalable, it protects not just individual users but the broader ecosystem of digital trust that everyone depends on.

This commons-based approach also has important implications for how we think about the responsibilities of technology companies, governments, and other powerful actors in digital space. If digital trust is indeed a commons that everyone depends on, then those with the power to shape technological systems have special responsibilities to protect and preserve that commons. The current regime, which socializes the costs of security failures while privatizing the profits from insecure systems, represents a form of what economists call market failure, one that requires collective action to address.

Democratic Technology: Participation, Transparency, and Collective Choice

Perhaps the most important philosophical dimension of credential reform concerns its implications for democratic governance in the digital age. The current crisis of digital identity is inseparable from broader concerns about technological power, democratic accountability, and collective self-determination in increasingly technology-mediated societies.

The fundamental democratic principle is that people should have meaningful say in the conditions that govern their lives. But our current digital systems operate largely outside democratic accountability. The technologies that shape how we communicate, work, learn, and relate to each other are designed by private companies according to logics of profit maximization rather than public welfare. The algorithmic systems that determine what information we see, what opportunities we encounter, and how we are categorized and treated operate as black boxes, inscrutable to the people they affect.

This represents what political scientists call a democratic deficit, a situation where power is exercised without meaningful accountability to those subject to that power. The credential crisis exemplifies this deficit: most people have never consented to live in a world where every digital interaction requires them to submit to surveillance technologies, where their personal patterns are continuously collected and analyzed, where their capacity to participate in digital life depends on accepting terms of service they cannot meaningfully negotiate.

Credential reform offers a pathway toward more democratic technological arrangements, systems designed to preserve human agency and choice rather than systematically undermining them. By eliminating the structural necessity for mass surveillance from authentication systems, it creates space for more participatory and accountable approaches to digital governance.

Consider what democratic technology might look like in practice. Instead of systems that operate as black boxes, we might have technologies whose logic is transparent and auditable. Instead of platforms that impose their terms unilaterally, we might have services that genuinely negotiate with users about acceptable tradeoffs between functionality and privacy. Instead of companies that accumulate vast power through data collection, we might have organizations that derive their legitimacy from serving user interests rather than exploiting user vulnerabilities.

These possibilities require more than technical innovation; they require new institutional arrangements, new legal frameworks, and new cultural expectations about how technology should relate to democratic values. But credential reform provides a crucial foundation by demonstrating that effective, convenient digital services do not require comprehensive surveillance and behavioral control.

The Ethics of Technological Choice: Agency, Autonomy, and Human Flourishing

At its deepest level, the credential crisis raises fundamental questions about human agency and autonomy in technological societies. What does it mean to live freely when so much of life is mediated by systems we cannot understand or control? How do we preserve meaningful choice when technological arrangements systematically constrain the range of available options? These questions connect technical debates about authentication systems to profound philosophical concerns about human dignity and flourishing.

The current paradigm of digital identity systematically undermines human agency in several ways. First, it creates technological dependencies that most people cannot meaningfully avoid. While participation in digital systems is theoretically voluntary, the practical alternatives have become so limited that choice is largely illusory. Second, it operates through systems so complex that meaningful informed consent becomes impossible. Most people cannot understand the implications of the technological arrangements they are asked to accept. Third, it creates lock-in effects that make it extremely difficult to change course once initial choices have been made.

These constraints on agency are not merely inconvenient; they represent genuine harms to human dignity and democratic capacity. When people cannot meaningfully choose the technological conditions that govern their lives, when they cannot understand or resist systems that monitor and control their behavior, when they cannot experiment with alternative approaches to digital life, in these circumstances, technology becomes a form of subtle but pervasive domination.

Credential reform offers a pathway toward what we might call technological autonomy, arrangements that enhance rather than constrain human agency and choice. By reducing the technical complexity of authentication systems, by eliminating the necessity for continuous surveillance, by creating systems that people can understand and control, these changes create new possibilities for genuine technological choice.

But technological autonomy requires more than technical innovation; it requires cultural and institutional changes that prioritize human agency over system efficiency, user control over platform convenience, democratic accountability over private profit. The credential reform proposal should be understood as one element of a broader project of democratizing technology and ensuring that technological power serves human flourishing rather than undermining it.

Collective Action and Social Transformation: From Individual Fixes to Systemic Change

The final philosophical dimension of credential reform concerns its relationship to broader processes of social change. Technical innovations do not automatically translate into social progress; they must be embedded in collective projects of institutional reform, cultural change, and political mobilization.

The history of technology is full of examples of innovations that promised liberation but delivered new forms of control, promised democratization but concentrated power, promised connection but enabled surveillance. The difference between liberating and oppressive technological change depends not primarily on the technical characteristics of innovations but on the social contexts in which they are developed, deployed, and governed.

This insight suggests that credential reform cannot succeed as merely a technical project; it must be part of a broader social movement toward more democratic, participatory, and accountable technological arrangements. Technical innovations like phishing-resistant authentication provide tools for social change, but they do not automatically produce that change.

What would such a movement look like? It might include demands for technological transparency and accountability, support for public alternatives to private technological platforms, advocacy for stronger privacy rights and democratic oversight of algorithmic systems, and education about the social implications of technological choices. It would connect technical experts, policy advocates, civil rights organizations, and ordinary users around shared commitments to human dignity and democratic values.

Credential reform provides a particularly powerful focus for such a movement because it connects technical improvements with broader values that most people share: privacy, security, autonomy, and democratic accountability. By demonstrating that effective technology can serve human values rather than subverting them, it provides concrete evidence that alternative technological futures are possible.

Conclusion: Toward a Philosophy of Technological Hope

The credential crisis that motivates this document is ultimately a crisis of technological imagination. We have become so accustomed to digital systems that operate through surveillance, control, and extraction that we have begun to assume these characteristics are inevitable features of digital technology itself. But this assumption reflects a failure of imagination rather than technical necessity.

The philosophical framework developed here suggests that technology is neither inherently liberating nor inherently oppressive; it is a site of ongoing social struggle whose outcomes depend on human choices about design, deployment, and governance. The current crisis of digital identity is not an inevitable result of technological development but a consequence of particular choices made by particular actors under particular institutional arrangements.

This recognition opens space for what we might call technological hope, the conviction that alternative technological futures are possible and worth working toward. Such hope is not naive optimism but realistic assessment of human capacity for collective action and social change. Throughout history, societies have proven capable of reshaping technological systems to better serve human values when sufficient numbers of people organize around shared commitments to justice and human dignity.

Credential reform provides a concrete example of what technological hope might look like in practice: technical innovations that serve human flourishing rather than undermining it, system designs that preserve privacy and autonomy rather than requiring their sacrifice, approaches to security that enhance rather than constrain human agency and democratic participation.

But realizing the potential of such innovations requires more than technical development; it requires collective commitment to building technological systems that reflect our deepest values and highest aspirations. The moonshot described in this document is ultimately not just about fixing authentication systems but about demonstrating that technology can serve human dignity and democratic possibility rather than subverting them.

The stakes could not be higher. The technological choices we make in the coming years will determine whether digital systems become tools of human liberation or instruments of subtle but pervasive control. By approaching credential reform as part of a broader philosophical and political project of democratizing technology, we can help ensure that these choices serve justice, dignity, and human flourishing for generations to come.

Why call cybersecurity a "moonshot"?

A moonshot is the kind of project we attempt when the problem is too big, too interconnected, and too important to ignore. Digital security and privacy qualify on all counts. Billions of people now live, work, vote, bank, and learn through software and networks. Keeping that entire fabric safe isn't a tweak; it's a generational lift that demands new ideas, new incentives, and coordinated work across sectors and borders. That is the spirit of a moonshot.

The original moonshot, landing humans on the moon and returning them safely to Earth, required unprecedented coordination between government agencies, private contractors, universities, and international partners. It demanded breakthrough innovations not just in rocket science, but in materials engineering, computer systems, telecommunications, life support, and project management. Most importantly, it required a shared vision compelling enough to sustain political commitment, public support, and resource allocation across multiple election cycles and economic conditions.

Digital security presents a challenge of comparable scope and complexity. Like the Apollo program, it requires breakthrough innovations across multiple domains simultaneously: cryptography and authentication systems, yes, but also user experience design, economic incentive structures, regulatory frameworks, international cooperation protocols, and cultural change management. Unlike the moon landing, however, the digital security challenge has no clear finish line; it requires building sustainable systems that can evolve and adapt as both threats and opportunities change over time.

The interconnected nature of digital systems amplifies both the challenge and the opportunity. When a hospital's systems are compromised, the impact cascades through insurance networks, pharmaceutical supply chains, emergency response coordination, and patient care delivery. When election infrastructure is attacked, the damage extends beyond vote tallies to public confidence in democratic institutions themselves. When financial systems are breached, the effects ripple through global markets, retirement accounts, small business operations, and individual families trying to pay their bills.

But this interconnectedness also means that improvements in digital security create cascading benefits. When authentication systems become more robust, fraud rates drop across entire economic sectors. When privacy protections become stronger, innovation accelerates as people become more willing to engage with digital services. When democratic institutions become more trustworthy, civic participation increases and social cohesion strengthens.

The scale of required coordination rivals that of addressing climate change or managing global pandemic response. It requires collaboration between technology companies that normally compete fiercely, government agencies with different mandates and priorities, international partners with varying political systems and economic interests, academic researchers working on different timescales than industry, and civil society organizations representing diverse communities and values.

But "moonshot" isn't just rhetoric about scale and complexity. It is a call to organize around what matters most: trust. When people trust that their accounts, votes, conversations, and records stay in the right hands, they participate more fully, economies run more smoothly, and democratic norms hold. When that trust erodes, we see the opposite: hesitancy, scams, polarization, and a general sense that the system is rigged. The pay-off for doing this right isn't only fewer hacks; it's a healthier society.

Trust, however, cannot be manufactured through marketing campaigns or policy declarations; it must be earned through demonstrable improvements in actual security outcomes. This requires moving beyond the current paradigm of treating cybersecurity as primarily a defensive exercise, focused on building higher walls around increasingly complex systems. Instead, we need offensive strategies that change the fundamental economics of digital attacks by eliminating the reusable artifacts that make credential theft profitable at scale.

The moonshot metaphor also captures something essential about timescale and ambition. Incremental improvements to existing password-based systems, while valuable, cannot address the structural vulnerabilities that make digital life precarious for billions of people. We need breakthrough innovations that fundamentally change the game, making the most profitable attack vectors unprofitable, making the most common vulnerabilities obsolete, making the most complex security tasks simple enough for ordinary people to manage effectively.

Like the original moonshot, this effort requires not just technological innovation but cultural transformation. We must move from a paradigm where digital security is treated as an individual responsibility, where people are blamed for falling victim to increasingly sophisticated attacks, to one where security is understood as a collective good that requires collective action to protect. This means designing systems that are secure by default rather than secure only for experts, accessible to people with diverse abilities and technical backgrounds, and aligned with democratic values rather than authoritarian control.

The moonshot framing also emphasizes the inspirational dimension of this challenge. Just as the Apollo program captured public imagination and motivated a generation of students to pursue careers in science and engineering, the digital security moonshot can inspire new forms of civic engagement, technological innovation, and international cooperation. When we succeed, not just in building better authentication systems, but in demonstrating that technology can serve human flourishing rather than undermining it, we provide concrete evidence that other seemingly intractable collective challenges can also be addressed through coordinated effort and shared commitment to the common good.

What's really at stake, for people and for society

For individuals. Privacy is the practical side of dignity. It is the ability to decide who knows what about you and when. In the digital world, that isn't a philosophical luxury; it's daily life. Weak security exposes people to identity theft, harassment, and the endless grind of trying to re-secure compromised accounts. Victims often report lasting stress and a sense of violation, especially after fraud or the non-consensual exposure of private information. The mental health impact is real.

But the individual stakes extend far beyond immediate financial losses or even psychological trauma. When digital security fails, it undermines people's capacity to live autonomous lives. Consider a domestic abuse survivor whose location is tracked through compromised accounts, making escape impossible. Think about activists whose private communications are intercepted, exposing them and their networks to retaliation. Picture small business owners whose customer data is stolen, destroying years of relationship-building and reputation. These aren't edge cases; they're predictable consequences of systemic security failures.

The cumulative effect of these individual vulnerabilities creates what researchers call "learned helplessness" around digital technology. People stop experimenting with new tools, avoid beneficial services, and retreat from opportunities that could improve their lives, not because they lack interest or capability, but because they have learned through repeated experience that digital engagement carries unacceptable risks. This learned helplessness disproportionately affects older adults, people with disabilities, those with limited technical education, and anyone who has experienced digital harm in the past.

The psychological burden of managing digital security under current systems is itself a form of inequality. Those with the time, resources, and technical knowledge to implement strong security practices gain access to more opportunities, while those without these advantages find themselves increasingly excluded from digital life. Password managers, two-factor authentication, VPNs, encrypted messaging: these tools provide protection, but only for those who can afford them, understand them, and have the cognitive bandwidth to manage them consistently.

For communities and democracy. Privacy supports free association and candid debate. When people suspect they're constantly watched, or that their data can be exploited, they pull back from civic life. That erodes social trust and weakens democratic institutions that rely on open participation. Economically, the costs stack up: incident response, legal liability, and the lost growth when people avoid online services because they don't feel safe. The burden falls hardest on already-marginalized groups, widening inequality.

The democratic stakes of digital security failures extend beyond individual withdrawal from civic life to systematic distortion of democratic processes themselves. When election systems are compromised or perceived as vulnerable, the legitimacy of democratic outcomes comes under question. When media organizations are hacked and false information is disseminated through their trusted channels, the shared information environment that democracy requires begins to collapse. When government agencies responsible for providing essential services are compromised, public confidence in institutional competence erodes.

These effects compound across time and scale. Each successful attack makes the next attack more likely, not just technically but politically and socially. Citizens who have experienced digital fraud become more suspicious of legitimate digital services. Voters who have seen election systems attacked become more skeptical of electoral outcomes. Businesses that have suffered data breaches become more reluctant to adopt new technologies that could improve productivity and innovation.

The economic implications operate at both macro and micro levels. For individuals, digital insecurity functions as a regressive tax: those least able to afford losses are most vulnerable to scams, fraud, and identity theft, while those with resources can purchase protection through private security services, exclusive platforms, and sophisticated technical tools. For businesses, security failures impose direct costs through incident response, legal liability, and regulatory penalties, but also indirect costs through reduced customer trust, employee productivity losses, and constrained innovation as security concerns override growth opportunities.

At the societal level, digital insecurity creates what economists call negative externalities, costs imposed on third parties who were not involved in the original transactions. When one organization's poor security practices enable attacks on others, when compromised systems are used as stepping stones for broader campaigns, when successful attacks provide intelligence for future attacks, in these cases, the full costs of insecurity are socialized while the benefits of cutting security corners remain privatized.

For global stability and international relations. Digital security failures increasingly intersect with traditional national security concerns in ways that complicate international cooperation and escalate conflict risks. When critical infrastructure is vulnerable to cyber attacks, when election integrity can be questioned due to digital manipulation, when economic systems can be disrupted through credential compromise, these vulnerabilities become strategic weapons that hostile actors can exploit to destabilize adversaries without triggering traditional military responses.

The interconnected nature of digital systems means that security failures in one country can cascade internationally, creating diplomatic tensions and economic disruptions that span borders. When a major supply chain attack originates from compromised credentials in one nation, affects critical infrastructure in a second, and disrupts economic activity in a third, the resulting crisis requires international coordination to resolve, but also provides opportunities for blame-shifting and conflict escalation.

Bottom line. Treat privacy and security as two sides of the same coin: without security, privacy promises collapse; without a culture that values privacy, security becomes an arms race with no public mandate. The stakes of getting this right extend from the most intimate personal relationships to the highest levels of international relations. When digital security serves human flourishing, it enables individuals to live autonomous lives, communities to engage in democratic participation, and societies to cooperate across differences. When digital security fails, it undermines all these foundations simultaneously, creating cascading vulnerabilities that can take generations to repair.

The bottleneck we all share: passwords and credentials

Most of today's damage flows through one chokepoint: compromised credentials. Passwords (and the codes that try to shore them up) are still the primary gatekeepers to our lives. They are also the most harvested, relayed, phished, guessed, reused, and resold artifacts in the entire ecosystem. Attackers don't need to "hack" a system if they can just log in the way you do. That's why fixing the credential problem is the fastest way to reduce risk everywhere.

The scale and sophistication of credential-focused attacks have reached industrial proportions. Criminal organizations now operate credential-harvesting operations with the efficiency and specialization of legitimate businesses. They maintain customer service departments to help buyers use stolen credentials, develop sophisticated market research to identify the most valuable targets, and invest in continuous innovation to stay ahead of defensive measures. Underground marketplaces offer credentials organized by price, geography, account type, and guaranteed freshness, with volume discounts and money-back guarantees.

This industrialization is enabled by fundamental structural weaknesses in how we approach digital identity. Every password-based system creates what security researchers call a "credential honeypot", a centralized target that, if compromised, provides access to thousands or millions of user accounts simultaneously. Even when organizations implement sophisticated security measures around these honeypots, they remain attractive targets because the payoff for successful attacks is so high and the technical barriers to exploitation are so low.

Why do credentials fail so often?

  • Ubiquity: The same password (or its variants) often gates many accounts. Breach one, and you've breached several.

  • Human realities: People reuse passwords and store them badly because the cognitive load is high. Attackers know this and automate around it.

  • Modern attack tooling: Phishing kits, keyloggers, credential stuffing, and "reverse proxy" phish pages turn password capture into an assembly line.

The ubiquity problem extends beyond individual password reuse to systemic password reuse across entire organizations and sectors. When people use similar patterns for creating passwords (common substitutions, predictable additions of numbers or symbols, familiar phrases from their personal or professional lives), they create vulnerabilities that attackers can exploit across multiple targets simultaneously. Intelligence gathered from breaching one organization provides insights that accelerate attacks against others, creating network effects that amplify individual vulnerabilities into systemic risks.

Human realities around password management reflect deeper tensions between security and usability that current systems have never adequately resolved. The cognitive burden of managing unique, strong passwords for dozens or hundreds of accounts exceeds most people's working memory capacity. Even sophisticated users who understand security principles often resort to patterns and shortcuts that undermine their security goals, because the alternative, complete randomness with no memorable patterns, creates unacceptable risks of being locked out of essential accounts.

The problem is compounded by the mismatch between how security experts think about passwords and how ordinary people actually use them. Security guidance often assumes that people interact with passwords in isolation, that choosing a strong password for one account is an independent decision unaffected by other cognitive demands. But in reality, people make password decisions in the context of managing complex lives with multiple competing priorities, limited attention, and varying levels of stress and distraction.

Modern attack tooling has evolved to exploit these human factors with ruthless efficiency. Phishing kits now include psychology-based elements designed to create time pressure, emotional manipulation, and authority compliance that override rational security decision-making. Keyloggers and credential-harvesting malware operate with such sophistication that they can remain undetected for months or years while collecting passwords, tokens, and behavioral patterns. Credential stuffing operations use machine learning to optimize password guessing based on patterns observed across millions of previous breaches.

Perhaps most troubling is the emergence of "reverse proxy" phishing techniques that position attackers as invisible intermediaries in seemingly legitimate authentication flows. These attacks defeat traditional security education because victims are actually visiting the correct websites and entering their credentials into legitimate login forms, but those forms are being silently captured and replayed by attackers in real time. The technical sophistication of these attacks means that even security-conscious users who carefully verify URLs and look for security indicators can be successfully targeted.

The economic incentives around credential theft have created a feedback loop that accelerates the development of increasingly sophisticated attack techniques. Because credentials can be monetized quickly and reliably through multiple channels (direct account abuse, sale in underground markets, use as stepping stones for larger attacks), criminal organizations have strong incentives to invest in research and development for credential harvesting. The return on investment for these activities remains extremely high because the fundamental vulnerabilities they exploit are structural features of password-based systems rather than implementation bugs that can be easily fixed.

The defensive response has focused primarily on adding layers of authentication (two-factor codes, biometric verification, risk-based analysis), but these approaches often create new vulnerabilities while failing to address the underlying structural problems. Additional authentication factors create additional targets for attack, more complex user experiences that increase cognitive burden, and new categories of failure modes that can leave users locked out of their own accounts.

There's no shame in acknowledging this: the credential layer is our weakest link. If we want to change outcomes, start here. But addressing the credential problem requires more than incremental improvements to existing password-based systems. It requires fundamentally rethinking the relationship between identity, authentication, and security in digital systems. This means moving from approaches based on secret-sharing to approaches based on capability demonstration, from reusable tokens to ephemeral proofs, from centralized honeypots to distributed verification systems.

The credential bottleneck represents both the greatest vulnerability and the greatest opportunity in the current digital security landscape. Because credential compromise is involved in the majority of successful attacks, improvements to authentication systems can have outsized impact on overall security outcomes. More importantly, fixing the credential problem opens possibilities for rebuilding digital trust on foundations that serve human flourishing rather than undermining it.

A different way to think about logging in: ability, not artifacts

The core idea behind the "Eni6ma / Rosario-Wang" approach is simple enough for anyone to grasp:

Don't ship secrets around. Prove, in the moment, that you can recognize a private pattern only you (or your device) can see, and do it in a way that never reveals the pattern.

In other words, identity as live ability, not as a reusable object. The system shows you a short, time-keyed visual (or audio/haptic) "projection." Because you hold a private mental or sealed device "map," the target feature pops out instantly for you. You respond with a masked, non-revealing answer that the system can check, but from which no one can reconstruct your secret. The proof is accepted for this moment only; even a perfect recording of the session can't be replayed later. (Below we'll unpack where this helps in the real world.)

This represents a fundamental philosophical shift in how we understand digital identity. Traditional authentication systems treat identity as a collection of artifacts (passwords, biometric templates, cryptographic keys, behavioral patterns) that can be stored, copied, transmitted, and verified. These artifacts become targets for theft because they retain their value across time and context. If someone steals your password today, they can use it tomorrow, next week, or next month until you change it.

The ability-based approach eliminates this temporal vulnerability by making identity verification a live, contextualized performance rather than a static credential check. Instead of asking "Do you possess the correct secret?" it asks "Can you demonstrate the correct capability right now?" This distinction matters because capabilities cannot be copied in the same way that artifacts can. You cannot steal someone's ability to recognize patterns, solve problems, or perform coordinated movements; you can only observe their performance of these abilities in specific moments.

Consider an analogy from the physical world. A traditional key can be copied if someone gains temporary access to it, and that copy will work indefinitely. But if building access required demonstrating your ability to play a specific melody on a piano, copying becomes much more difficult. An attacker would need to not only observe your performance but also develop the same musical capability, and even then, their performance could be distinguished from yours through subtle variations in timing, emphasis, and style.

The technical implementation of this philosophy relies on mathematical properties that make observation useless for replication. The system generates a time-bound challenge that appears random to observers but contains recognizable patterns for someone holding the appropriate private "map" or key. Your response demonstrates pattern recognition without revealing the underlying pattern itself. The mathematical relationship between challenge and response is designed so that correct responses prove capability without exposing the knowledge that enables that capability.
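To make the round structure concrete, here is a minimal sketch in Python. It is illustrative only: the 26-letter alphabet, the six regions, and the single secret symbol are stand-ins for the richer hidden state a production scheme would carry, and the function names are ours, not the scheme's.

```python
import secrets
import string

ALPHABET = list(string.ascii_uppercase)  # toy symbol space for illustration
NUM_REGIONS = 6
RNG = secrets.SystemRandom()  # CSPRNG-backed shuffling

def make_projection():
    """Deal a freshly shuffled alphabet into six regions.

    Each round uses new randomness, so the same secret symbol
    lands in a different, unpredictable region every time."""
    deck = ALPHABET[:]
    RNG.shuffle(deck)
    return [deck[i::NUM_REGIONS] for i in range(NUM_REGIONS)]

def respond(projection, secret_symbol):
    """The prover answers with the index of the region holding the
    secret symbol -- never with the symbol itself."""
    for index, region in enumerate(projection):
        if secret_symbol in region:
            return index
    raise ValueError("symbol missing from projection")

def authenticate(secret_symbol, rounds=8):
    """The verifier runs several rounds; a blind guesser passes all
    of them with probability (1/6) ** rounds."""
    for _ in range(rounds):
        projection = make_projection()
        answer = respond(projection, secret_symbol)   # prover's move
        if secret_symbol not in projection[answer]:   # verifier's check
            return False
    return True

print(authenticate("Q"))  # True for the legitimate prover
```

One honest caveat: in this toy, an eavesdropper who records many rounds could intersect the answered regions and narrow down a single static symbol, which is why the actual scheme is described as using hidden state that observation cannot reconstruct. The sketch captures the liveness and replay-resistance of the round structure, not the full non-reconstructibility property.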

Two practical consequences follow:

  1. Nothing reusable leaks. If attackers record everything, they still get nothing that works tomorrow.

  2. Keys emerge only after a pass. If you pass the check, the system derives an ephemeral session key: no static keys in transit and none left lying around at rest.

This shift, from "who has the secret" to "who can do the work right now", is what makes the approach resilient against phishing, replay, deepfakes, and even highly persuasive AI-written scams. (A convincing message can't authorize anything unless a live proof passes.)
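Consequence 2 can also be sketched. Assuming an HMAC-based derivation over a shared enrollment secret (our assumption for illustration; the scheme's actual key derivation is not specified here), both ends can compute the same ephemeral key from the round transcript, a one-time nonce, and the current time window, so no key material ever crosses the wire:

```python
import hashlib
import hmac
import os
import time

def derive_session_key(enrollment_secret: bytes,
                       transcript: bytes,
                       session_nonce: bytes,
                       window_seconds: int = 60) -> bytes:
    """Derive an ephemeral key bound to this proof and time window.

    Client and server each compute it locally after the live proof
    passes; it is never transmitted and dies with the window."""
    window = int(time.time()) // window_seconds
    material = transcript + session_nonce + window.to_bytes(8, "big")
    return hmac.new(enrollment_secret, material, hashlib.sha256).digest()

# Usage: after a passed check, both ends derive the same 32-byte key.
nonce = os.urandom(16)  # fresh per session, exchanged during the handshake
key = derive_session_key(b"enrollment-secret", b"round-transcript", nonce)
print(key.hex())
```

Because the transcript and nonce are unique to the session and the window expires, a recorded exchange yields nothing from which a valid key can be derived later.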

The resilience extends beyond technical attacks to social engineering. Because the authentication process requires live capability demonstration rather than recall of stored information, attackers cannot use psychological pressure, authority manipulation, or time constraints to extract reusable credentials. Even if someone convinces you to perform an authentication ceremony, that performance is bound to a specific moment and context; it cannot be captured and replayed elsewhere.

A tiny bit of helpful math (with plain talk): If each quick round is like choosing the correct region out of six, then a random guesser's chance to pass $h$ rounds is $(1/6)^h$.

That's a fancy way of saying "luck runs out fast." Increase $h$ slightly and you make random success astronomically unlikely, while keeping the human task short and easy. (The system can tune $h$ to match risk.) This mathematical foundation provides security guarantees while maintaining usability: legitimate users experience the authentication as intuitive pattern recognition, while attackers face exponentially decreasing odds of success through guessing.
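Plugging a few values of $h$ into the formula makes the point vivid:

```python
# Odds that a blind guesser passes h six-way rounds in a row.
for h in (4, 8, 12, 16):
    p = (1 / 6) ** h
    print(f"h = {h:2d}: p = {p:.3g}  (about 1 in {round(1 / p):,})")
```

Eight rounds already push a guesser below one in 1.6 million; sixteen rounds put success near one in 2.8 trillion, while the human task remains a handful of quick recognitions.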

Why "privacy requires security" (and not the other way around)

It's tempting to treat privacy as a settings screen or a legal checkbox. In practice, privacy rides on working security:

  • Confidentiality: Without encryption and access controls, data can't be kept from the wrong eyes.

  • Integrity: If malware or a compromised system can alter records at will, no promise of privacy stands.

  • Authentication: If anyone can impersonate you, they can fetch your data.

This dependency relationship is often misunderstood in both public discourse and policy discussions. Privacy advocates sometimes argue for strong privacy protections without acknowledging the security infrastructure necessary to make those protections meaningful. Security professionals sometimes implement robust technical measures without considering whether those measures actually serve privacy goals or inadvertently undermine them. The result is often systems that are neither private nor secure, failing to achieve either objective effectively.

Consider what privacy actually means in operational terms. Privacy is not simply the absence of observation; it is the presence of meaningful control over who has access to what information under what circumstances. This control requires reliable mechanisms for enforcing access decisions, detecting unauthorized access attempts, and maintaining the integrity of access control systems themselves. Without these security foundations, privacy policies become empty promises that cannot be enforced when they matter most.

The confidentiality principle illustrates why security must come first. Even the strongest privacy laws are meaningless if the technical systems they govern cannot reliably protect information from unauthorized access. Encryption provides the mathematical foundation for confidentiality, but encryption is only as strong as the key management systems that control access to encrypted data. If authentication systems are compromised, encryption keys can be stolen. If access control systems are bypassed, encrypted data can be decrypted by unauthorized parties. If communication channels are intercepted, encrypted messages can be captured and analyzed.

Integrity attacks represent an even more insidious threat to privacy because they can be virtually invisible to both users and privacy advocates. When attackers can alter records without detection, they can manipulate privacy settings, modify access logs, change user preferences, and rewrite the very policies that are supposed to protect user data. Integrity violations can make privacy violations look like legitimate access, hiding the breach while enabling ongoing surveillance and data collection.
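A small illustration of the integrity point, using a standard keyed hash (HMAC, a stock primitive rather than anything specific to this proposal): without such a check, a silent edit to a stored privacy setting looks identical to a legitimate record; with it, tampering is detectable by anyone holding the verification key.

```python
import hashlib
import hmac

INTEGRITY_KEY = b"server-side integrity key"  # illustrative key handling

def seal(record: bytes) -> bytes:
    """Attach a MAC so later alteration of the record is detectable."""
    return hmac.new(INTEGRITY_KEY, record, hashlib.sha256).digest()

def verify(record: bytes, tag: bytes) -> bool:
    """Recompute the MAC; compare_digest avoids timing leaks."""
    expected = hmac.new(INTEGRITY_KEY, record, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

record = b'{"user": 42, "share_data": false}'
tag = seal(record)
tampered = b'{"user": 42, "share_data": true}'
print(verify(record, tag))    # True: record is intact
print(verify(tampered, tag))  # False: the silent edit is caught
```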

The authentication foundation is perhaps most critical because it underlies both confidentiality and integrity protections. If authentication systems can be compromised, attackers can impersonate legitimate users to access confidential information and impersonate legitimate administrators to modify integrity protections. This is why credential theft is so damaging to privacy; it allows attackers to bypass privacy protections by appearing to be authorized users rather than external threats.

Treat these as a stack: if the base is weak, the promises above it crumble. That is why "fixing the credential layer" and "reducing replay" aren't arcane engineering tasks; they are the foundation for the freedoms we associate with privacy.

This stack metaphor reveals why attempts to achieve privacy without security often fail catastrophically. Privacy laws that mandate disclosure controls but ignore authentication vulnerabilities create systems that appear to protect user data while actually making that data more accessible to attackers. Privacy-preserving technologies that focus on limiting data collection but neglect access control create databases that are simultaneously privacy-compliant and security-compromised.

The reverse dependency, security requiring privacy, is much weaker and more contingent. While privacy concerns can motivate security investments and privacy-conscious design can lead to more secure systems, security objectives can often be achieved through surveillance and control rather than privacy protection. Authoritarian security models that monitor everyone to identify threats can be technically effective while being privacy-destructive. This asymmetry explains why security often develops at the expense of privacy rather than in partnership with it.

The credential reform approach recognizes this dependency relationship by designing security systems that enhance rather than compromise privacy. By eliminating the need for continuous surveillance, behavioral monitoring, and comprehensive data collection from authentication systems, it demonstrates that robust security can be achieved while strengthening rather than weakening privacy protections. This alignment between security and privacy objectives is not automatic, it requires deliberate design choices that prioritize both goals simultaneously rather than treating them as competing concerns.

Where this approach helps, concretely

A. Elections and democratic processes

One of the most direct public-interest uses is voter identity. The idea isn't to surveil citizens; it's to anchor every ballot to a non-replayable, time-scoped proof that the right person is present, without exporting biometric templates or passwords that could be stolen. The proposal goes further: provide this capability to governments at no cost, precisely to raise baseline trust in election integrity everywhere.

Current election security faces a fundamental dilemma: how to verify voter eligibility without creating surveillance infrastructure that could be misused for voter suppression or political targeting. Traditional approaches require storing voter identification data in centralized databases that become attractive targets for both foreign adversaries and domestic bad actors. When these databases are breached, as happened in multiple states during recent election cycles, the compromise affects not just current elections but creates long-term vulnerabilities as stolen voter data can be used for registration fraud, targeted disinformation campaigns, and identity theft.

The ability-based approach resolves this dilemma by enabling voter verification without persistent data storage. During voter registration, citizens would enroll in a privacy-preserving identity system that binds their eligibility to a non-exportable capability they can demonstrate, rather than to a copyable credential they must possess. On election day, voters prove their identity through a brief, time-scoped demonstration that generates no reusable artifacts—no stored biometrics, no cached passwords, no persistent tokens that could be stolen and misused later.

This system provides several critical advantages for democratic processes. First, it eliminates the "honeypot" problem where centralized voter databases create single points of failure that, if compromised, can undermine confidence in entire electoral systems. Second, it enables real-time verification of voter presence without creating permanent records that could enable post-election surveillance or retaliation. Third, it provides cryptographic proof of election integrity that can be audited without compromising voter privacy.
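
The auditability claim can be made concrete with a small sketch. The hash chain below is an assumed construction for illustration, not the deployed design: each verification event commits to the one before it, so any after-the-fact edit is detectable, while the log records only pass/fail outcomes, never identities or biometric material.

```python
import hashlib, json

def append(chain: list, event: dict) -> None:
    """Append an event that commits to the previous entry's digest."""
    prev = chain[-1]["digest"] if chain else "0" * 64
    body = json.dumps({"prev": prev, "event": event}, sort_keys=True)
    chain.append({"prev": prev, "event": event,
                  "digest": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain: list) -> bool:
    """Recompute the chain; any edited or reordered entry breaks it."""
    prev = "0" * 64
    for entry in chain:
        body = json.dumps({"prev": prev, "event": entry["event"]}, sort_keys=True)
        if entry["prev"] != prev or entry["digest"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = entry["digest"]
    return True

log: list = []
append(log, {"precinct": 12, "result": "pass", "t": 1700000000})  # no identities stored
append(log, {"precinct": 12, "result": "pass", "t": 1700000042})
assert verify(log)
log[0]["event"]["result"] = "fail"   # any after-the-fact edit is detectable
assert not verify(log)
```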

The public-good dimension is crucial. By offering this capability to governments at no cost, the approach ensures that election security improvements aren't limited to wealthy jurisdictions that can afford expensive proprietary solutions. This democratizes access to state-of-the-art election security, helping to raise baseline trust in democratic processes globally. When citizens can be confident that their votes are both secret and counted accurately, democratic participation increases and social cohesion strengthens—benefits that extend far beyond individual elections to support the long-term health of democratic institutions themselves.

B. "Fake news" and trustworthy reporting

A simple way to strengthen trust in journalism: let reporters cryptographically sign their work with a credential bound to a private, non-exportable identity, so audiences and platforms can verify "this story came from a known journalist," while the underlying method resists cloning and replay. This raises the bar for mis-attribution and impersonation without putting authors' raw biometrics on the wire.

The erosion of trust in news and information represents one of the most serious threats to democratic society. When audiences cannot distinguish between legitimate journalism and manufactured content—whether produced by malicious actors, AI systems, or coordinated disinformation campaigns—the shared factual foundation necessary for democratic deliberation begins to collapse. Traditional approaches to this problem, such as platform-based fact-checking or algorithmic content moderation, often create new problems by concentrating editorial power in the hands of technology companies rather than journalistic institutions.

Cryptographic provenance offers a different approach: rather than having platforms decide what information is trustworthy, enable audiences to verify the source credentials of content creators directly. When journalists sign their work using non-exportable identity credentials, readers can verify that a story actually came from a known reporter without relying on platform intermediaries. This verification works even when content is shared across multiple platforms, forwarded through social media, or republished by third parties.

The technical implementation prevents common attack vectors against journalistic credibility. Because signing credentials are bound to non-exportable capabilities rather than copyable tokens, adversaries cannot steal a journalist's identity and use it to publish false stories under their name. The time-scoped nature of proofs means that even if someone captures a signing ceremony, they cannot replay it to authenticate different content. This creates much higher barriers for sophisticated disinformation operations that currently rely on compromised social media accounts, stolen credentials, or impersonation of legitimate news organizations.
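
A hedged sketch of the signing flow, using a detached Ed25519 signature via the widely used Python cryptography package. In deployment the private key would live in a sealed, non-exportable store; the in-memory keypair and sample article below are purely illustrative.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

reporter_key = Ed25519PrivateKey.generate()   # stands in for a sealed credential
article = "EXCLUSIVE: council approves budget...".encode()

signature = reporter_key.sign(article)        # attached to the story as metadata

# Any reader or platform can verify against the reporter's published key:
public_key = reporter_key.public_key()
try:
    public_key.verify(signature, article)
    print("verified: story signed by the known reporter key")
except InvalidSignature:
    print("rejected: signature does not match this content")

# Tampering breaks verification:
try:
    public_key.verify(signature, article + b" [edited]")
except InvalidSignature:
    print("rejected: edited copy fails verification")
```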

For news organizations, this approach provides measurable benefits beyond security. Cryptographically signed content can be automatically prioritized by search engines and social media algorithms, giving legitimate journalism competitive advantages over unsigned content. Publishers can track how their content spreads across the internet while maintaining attribution to original reporting. Freelance journalists can build portable reputations that follow them between publications, reducing barriers to entry in an increasingly fragmented media landscape.

The broader social impact extends beyond individual news organizations to the health of public discourse itself. When audiences can easily verify content provenance, they become more discerning consumers of information. When bad actors face higher technical barriers to impersonation, the overall quality of information in circulation improves. This helps restore the shared epistemological foundation that democratic societies require to function effectively.

C. Child protection and missing-child response

Early, privacy-respecting enrollment (with parental consent and tight policy limits) can support fast, verified presence checks in emergencies, without building a searchable, leak-prone biometric database. The point is rapid aid when it matters, not surveillance when it doesn't.

Child safety represents one of the most emotionally charged and technically challenging areas for identity systems. Traditional approaches often force impossible choices: either accept the risks of delayed identification in emergency situations, or build comprehensive surveillance infrastructure that creates new vulnerabilities for the children it's meant to protect. Current biometric databases used for child identification create attractive targets for criminals, predators, and hostile governments—when compromised, they expose children to lifelong risks of identity theft and tracking.

The ability-based approach provides a third option: rapid emergency verification without persistent surveillance infrastructure. Children would be enrolled with strong parental consent and judicial oversight into systems that bind their identity to non-exportable capabilities rather than stored biometric templates. During emergencies—missing child situations, natural disasters, medical crises where children are separated from guardians—authorized responders could verify a child's identity through brief, supervised capability demonstrations that leave no permanent records.

This system addresses several critical requirements simultaneously. Speed: verification can happen in seconds rather than hours, which can be life-saving in emergency situations. Privacy: no biometric templates are stored that could be stolen or misused for tracking. Accuracy: mathematical properties ensure that false positives (wrong child identified) and false negatives (right child not recognized) are extremely rare. Auditability: every verification attempt creates tamper-evident logs that can be reviewed by oversight authorities without exposing the underlying identity mechanisms.

The policy framework surrounding such systems would require extraordinary safeguards. Access would be limited to specific emergency scenarios with judicial oversight. Enrollment would require informed consent from parents or guardians, with clear opt-out mechanisms as children reach the age of majority. Usage logs would be subject to regular audit by child protection advocates and privacy organizations. The technology would be designed to automatically purge records after specified periods and prevent function creep into routine surveillance applications.

When implemented with appropriate safeguards, this approach can dramatically improve outcomes for children in crisis situations while avoiding the surveillance infrastructure that traditional biometric approaches require. By making rapid identification possible without persistent data storage, it helps reunite families faster, prevents trafficking, and supports medical care for children who cannot communicate their medical history—all while maintaining the privacy protections that children deserve in their digital lives.

D. Banking and payments

Because "pass" derives ephemeral keys and "fail" leaves nothing behind, banks can gate high-risk steps (wire releases, role changes, recovery flows) behind micro-ceremonies rather than OTP codes that can be phished in real time. This reduces fraud and operational pain while making approvals harder to coerce or replay.

Financial fraud represents the fastest-growing category of cybercrime, with losses exceeding $16 billion annually in the United States alone. The vast majority of these attacks exploit weaknesses in authentication systems: stolen passwords, intercepted SMS codes, social engineering attacks that convince victims to share verification codes, and sophisticated phishing operations that capture credentials in real time. Traditional financial security measures often create as many problems as they solve—complex password requirements that customers cannot remember, multiple authentication steps that create friction and abandonment, recovery processes that themselves become attack vectors.

The ability-based approach fundamentally changes the economics of financial fraud by eliminating the reusable artifacts that make current attacks profitable. When a customer authorizes a high-risk transaction—a large wire transfer, a change to their account recovery settings, the addition of a new payee—they perform a brief capability demonstration rather than entering a code that could be intercepted or coerced. The mathematical properties of these demonstrations ensure that even perfect recordings cannot be replayed to authorize different transactions.

This approach addresses several pain points that plague current banking security. First, it eliminates SIM-swapping attacks, where criminals transfer a victim's phone number to a device they control to intercept SMS-based authentication codes. Second, it defeats real-time phishing operations where attackers position themselves as intermediaries between victims and legitimate banking websites, capturing and forwarding authentication codes as they're entered. Third, it prevents coercion attacks where criminals physically threaten victims to obtain authentication codes or passwords.

For banks, the operational benefits extend beyond fraud reduction to improved customer experience and reduced support costs. Customers no longer need to remember complex passwords, manage authentication apps, or navigate lengthy recovery processes when they lose access to their devices. Support calls related to locked accounts, forgotten passwords, and stolen devices drop dramatically. The streamlined authentication process reduces abandonment rates for high-value transactions that currently fail due to authentication friction.

The system scales naturally across different risk levels and transaction types. Low-risk activities like checking account balances might require no additional authentication beyond initial login. Medium-risk activities like standard transfers between known accounts might require single-round demonstrations. High-risk activities like setting up new payees or large international transfers might require multi-round demonstrations with additional verification steps. This risk-based approach balances security with usability while providing clear audit trails for regulatory compliance and fraud investigation.
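
A sketch of how such a risk-tier table might look in code. The action names, round counts, and the per-round guess probability of 1/6 are assumptions for illustration, not any bank's actual policy.

```python
# Illustrative tiers: higher-risk actions demand more proof rounds, so a
# guesser's odds shrink as (1/k) ** rounds.
RISK_POLICY = {
    "check_balance":        0,   # covered by the session login itself
    "transfer_known_payee": 1,
    "add_new_payee":        3,
    "international_wire":   5,
}

def guess_odds(action: str, k: int = 6) -> float:
    """Chance that pure guessing passes every extra round for this action."""
    return (1 / k) ** RISK_POLICY[action]

for action, rounds in RISK_POLICY.items():
    print(f"{action}: {rounds} extra round(s), guess odds = {guess_odds(action):.2e}")
```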

E. Everyday platforms and operating systems

Embedding the primitive at the OS level (across macOS, Windows, Linux/Unix, Android, iOS, Raspberry Pi) means apps don't have to reinvent identity. They can rely on the platform for "actor-bound, non-replayable proofs," improving consistency and shrinking the attack surface across devices people use all day.

The fragmentation of digital identity across countless apps and services creates both security vulnerabilities and user experience nightmares. Every application that implements its own authentication system becomes a potential attack vector, and users must navigate dozens of different login processes with varying security requirements and usability characteristics. This fragmentation also creates inconsistent security postures—users might have strong authentication for their banking app but weak passwords for applications that access equally sensitive information.

Operating system-level integration of ability-based authentication solves these problems systematically. When the capability demonstration primitive is built into the OS identity subsystem, applications can leverage sophisticated authentication without implementing complex cryptographic systems themselves. This follows the successful model of how OS-level integration of features like encryption, secure storage, and biometric recognition has improved both security and usability across entire application ecosystems.

For users, OS-level integration means consistent, predictable authentication experiences across all their applications. Instead of managing separate passwords, authenticator apps, and recovery processes for each service, users perform familiar capability demonstrations that work the same way regardless of which application is requesting authentication. This consistency reduces cognitive burden and eliminates many of the user errors that lead to security compromises.

For developers, OS-level primitives dramatically lower the barriers to implementing strong authentication. Instead of choosing between easy-to-implement but insecure password-based systems and complex but secure cryptographic approaches, developers can simply request identity proofs from the OS and focus on their application's core functionality. This democratizes access to state-of-the-art authentication technology, ensuring that even small applications can provide enterprise-grade security.
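
To make the developer experience tangible, here is a hypothetical API surface. Every name in it (PlatformIdentity, request_proof, ProofResult) is invented for this sketch; the point is the shape: an app states a policy label and a round count, and the OS returns only an outcome plus an ephemeral key, never the secret map.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProofResult:
    passed: bool
    session_key: Optional[bytes]   # ephemeral; never written to disk

class PlatformIdentity:
    """Stand-in for an OS identity subsystem exposing the primitive."""
    def request_proof(self, policy_label: str, rounds: int) -> ProofResult:
        raise NotImplementedError("provided by the platform, not by the app")

def release_wire(identity: PlatformIdentity, wire_id: str) -> bool:
    # The app states what it wants approved and how much assurance it needs;
    # it never touches secrets, templates, or the ceremony internals.
    result = identity.request_proof(policy_label=f"release wire {wire_id}", rounds=4)
    return result.passed and result.session_key is not None

class DemoIdentity(PlatformIdentity):
    """Test double so the sketch runs; a real platform runs the ceremony."""
    def request_proof(self, policy_label: str, rounds: int) -> ProofResult:
        return ProofResult(passed=True, session_key=b"\x00" * 32)

assert release_wire(DemoIdentity(), "123")
```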

The security benefits compound across the entire device ecosystem. When authentication is handled at the OS level, the attack surface shrinks from dozens of application-specific implementations to a single, well-audited system component. Security updates improve authentication across all applications simultaneously rather than requiring individual apps to patch their authentication systems. Malware that targets application-specific credential storage becomes ineffective because credentials are handled by protected OS subsystems rather than individual applications.

Cross-platform standardization amplifies these benefits by enabling consistent identity experiences across different devices and operating systems. When a user switches from an iPhone to an Android device, or from a Windows laptop to a MacBook, their authentication experience remains familiar and their security posture remains strong. This reduces the learning curves and security gaps that currently occur when users move between platforms with different authentication paradigms.

F. Personal, private AI agents (and autonomous tools)

Automation is useful, until a bot does something you didn't intend. Agent actions should be gated by the same non-exportable identity primitive as humans use. That yields agents with strictly scoped permissions, bounded lifetimes, and verifiable presence at each sensitive step. You get convenience without handing a general-purpose robot permanent, replayable credentials.

The rapid proliferation of AI agents and automated tools creates unprecedented challenges for digital identity and access control. Traditional approaches to service authentication—API keys, OAuth tokens, service account credentials—were designed for simple, predictable automated tasks. But modern AI agents can perform complex, context-dependent actions that blur the line between human and machine decision-making. When an AI agent manages your calendar, responds to emails, makes purchases, or controls smart home devices, the consequences of compromised or misbehaving automation can be severe.

Current credential-based approaches to agent authentication create several serious problems. First, they typically involve long-lived, high-privilege tokens that become attractive targets for theft. When API keys are stolen, attackers gain persistent access to all the services and data that the agent was authorized to access. Second, they provide little granular control over agent behavior—agents are either fully authorized to act on your behalf, or they're not authorized at all. Third, they create audit nightmares because it's difficult to distinguish between legitimate automated actions and actions taken by attackers using stolen credentials.

Ability-based authentication fundamentally changes how we think about agent authorization. Instead of giving agents permanent credentials that grant broad access to your digital life, agents would demonstrate capability for specific actions at the moment they need to perform them. An AI calendar agent would prove its authorization separately for each meeting it schedules. A shopping agent would demonstrate permission individually for each purchase it makes. A home automation agent would verify authorization for each device command it executes.

This approach enables much more granular control over automated actions. You can authorize an agent to make purchases up to a certain dollar amount, schedule meetings only during business hours, or control only specific categories of smart home devices. These restrictions are enforced cryptographically rather than through policy files that can be modified by compromised systems. When you change your mind about an agent's permissions, you can revoke its authorization immediately without having to track down and rotate all the credentials it might have cached.
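
A minimal sketch of such grants, with shapes assumed for illustration: each grant names one action, one bound, and one expiry, and is checked cryptographically at the moment of use rather than read from a mutable policy file.

```python
import hmac, hashlib, os, time

OWNER_SECRET = os.urandom(32)   # held by the owner's sealed identity core

def grant(action: str, max_amount: float, ttl_s: int) -> dict:
    """Issue a single-purpose, time-boxed grant bound to the owner's secret."""
    expiry = int(time.time()) + ttl_s
    payload = f"{action}|{max_amount}|{expiry}".encode()
    return {"action": action, "max_amount": max_amount, "expiry": expiry,
            "tag": hmac.new(OWNER_SECRET, payload, hashlib.sha256).digest()}

def authorize(g: dict, action: str, amount: float) -> bool:
    """Verify the tag, the scope, the bound, and the lifetime at use time."""
    payload = f"{g['action']}|{g['max_amount']}|{g['expiry']}".encode()
    tag_ok = hmac.compare_digest(
        g["tag"], hmac.new(OWNER_SECRET, payload, hashlib.sha256).digest())
    return (tag_ok and g["action"] == action
            and amount <= g["max_amount"] and time.time() < g["expiry"])

g = grant("purchase", max_amount=50.0, ttl_s=600)
assert authorize(g, "purchase", 19.99)        # in scope: allowed
assert not authorize(g, "purchase", 500.00)   # over the bound: refused
assert not authorize(g, "send_email", 0.0)    # different action: refused
```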

The audit and accountability benefits are equally important. Every agent action is tied to a specific, time-scoped authorization that can be traced back to you without revealing your ongoing ability to authorize similar actions. This creates clear audit trails for agent behavior while protecting your privacy and maintaining your ability to revoke agent permissions quickly when needed. If an agent starts behaving unexpectedly, you can immediately see which specific authorizations enabled that behavior and revoke them without affecting the agent's ability to perform other, still-desired tasks.

For the broader AI ecosystem, this approach helps address growing concerns about AI safety and alignment. When AI agents must continuously demonstrate authorization for their actions rather than operating with broad, persistent permissions, they become naturally more constrained and auditable. This doesn't solve all AI safety problems, but it provides important guardrails that help ensure AI agents remain tools that enhance human agency rather than autonomous actors that replace human decision-making.

How it works at a glance (no jargon required)

Think of the ceremony as a quick, timed handshake with three important properties:

  1. Freshness: Each login attempt is tied to "now" by a time label and fresh randomness. Yesterday's transcript is cryptographically useless today.

  2. Private comprehension: You (or your device) hold a private "map" that makes the target in the projection obvious to you but meaningless to outsiders. Your answer is masked so the system can verify it without learning your secret.

  3. One-shot capability: If all rounds check out, an ephemeral key is derived to protect the session. If not, no key exists to steal later.

Imagine the system shows you a grid of colored shapes that looks random to everyone else, but because you hold a secret "decoder," one specific region lights up for you like a beacon. You indicate that region without revealing why you chose it. The system can verify your choice is correct without learning your decoder pattern. Even if someone records everything perfectly, they can't use that recording tomorrow because the grid will be completely different.
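
For readers who think in code, here is a deliberately simplified sketch of the freshness and one-shot-key properties. An HMAC over a fresh nonce stands in for the private "map"; the real ceremony is interactive and multi-round, so treat this as the shape of the idea, not the protocol itself.

```python
import hmac, hashlib, os, time

SECRET_MAP = os.urandom(32)   # stands in for the non-exportable private map

def issue_challenge():
    """Verifier binds the ceremony to now: fresh randomness plus a time label."""
    return {"nonce": os.urandom(16), "time_slot": int(time.time() // 30)}

def prove(secret, challenge):
    """Prover's masked answer: meaningless outside this nonce and time window."""
    msg = challenge["nonce"] + challenge["time_slot"].to_bytes(8, "big")
    return hmac.new(secret, msg, hashlib.sha256).digest()

def verify_and_derive(secret, challenge, answer):
    """On a pass, derive a one-shot session key; on a fail, no key exists to steal."""
    expected = prove(secret, challenge)
    if not hmac.compare_digest(expected, answer):
        return None
    return hashlib.sha256(b"session-key" + expected).digest()

ch = issue_challenge()
assert verify_and_derive(SECRET_MAP, ch, prove(SECRET_MAP, ch)) is not None
# A perfectly recorded answer fails once the challenge changes:
assert verify_and_derive(SECRET_MAP, issue_challenge(), prove(SECRET_MAP, ch)) is None
```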

This is intentionally carrier-agnostic: the same proof logic can be delivered visually, by sound, or by touch. That improves accessibility (e.g., for low-vision users) and resilience (switch the modality if one channel is under attack), while keeping the math and security guarantees identical. Audio versions might present tone sequences where you identify harmonic patterns. Haptic versions could use vibration patterns that resonate with your stored tactile map.

The beauty lies in its simplicity for legitimate users: the correct answer feels obvious and immediate, requiring no conscious effort to remember passwords or retrieve devices. For attackers, the task becomes impossibly difficult because they lack the private context that makes the answer apparent. This asymmetry—easy for you, hard for everyone else—creates security that feels natural rather than burdensome.

Why this is safer than "just use a better password"

Classic defenses (passwords + codes + "approve" taps) keep failing for familiar reasons:

  • Replay & relay: Attackers forward your code in real time, or replay a captured approval.

  • Biometric spoofing: Faces and voices are widely deepfaked; templates can be stolen and reused.

  • Model-in-the-middle: A malicious "assistant" or proxy can rewrite flows and misbind your intent.

The fundamental problem with traditional approaches is that they all rely on transferring proof from you to the system—passwords travel over networks, biometric templates get stored in databases, authentication codes move through SMS or apps. Each transfer creates an opportunity for interception, each storage location becomes a target for theft, each transmission can be recorded and potentially replayed.

The proposed approach changes the ground rules:

  • No exportable secret: There's nothing for phishers to harvest because the "thing that proves you" never leaves your head or the device's sealed core.

  • No reusable token: A recording of a successful session can't be played back; fresh time and randomness break the mapping.

  • Bound to policy: The proof can bind to a specific action ("approve this wire"), making "convincing text" inert unless accompanied by a pass for the right policy.

Consider the difference: when you type a password, you're sending your secret to the system. When you demonstrate capability, you're proving you can perform a task without revealing how. It's like the difference between giving someone your house key versus demonstrating that you can unlock your door. The key can be copied; the demonstration cannot.

This shift defeats entire categories of attack that have plagued digital security for decades. Social engineering becomes much harder because there's nothing to extract that works later. AI-generated phishing becomes less effective because convincing text cannot authorize actions without live demonstration. Even sophisticated man-in-the-middle attacks fail because there are no reusable credentials to capture and forward.

That's the heart of the "silver bullet" claim: reduce credential theft by removing credentials from the wire and reduce replay by making every pass unique in time.

Ethics, governance, and practical safeguards

A security primitive is only as good as its deployment discipline. History teaches us that powerful technologies can serve human flourishing or undermine it, depending on how they're governed and deployed. The difference lies not in the technology itself but in the institutions, policies, and cultural norms that shape its use.

Sensible guardrails include:

  • Data minimization by design. Store only commitments, timestamps, and acceptance results, not raw selections, not biometrics, and not secrets. (The verifier needs to know that you passed, not how you looked doing it.)

  • Accessibility as a first-class requirement. Provide the same proof logic via multiple modalities (visual, audio, haptic), so people can choose how to interact without losing security.

  • Tunable assurance. Let organizations pick the number of rounds $h$ and acceptance thresholds for high-risk steps, and relax them for everyday use, just like you already vary 2FA prompts by risk.

  • Transparent policy binding. Every privileged operation should hash a policy label (e.g., "release payment #123") into the proof so intent can't be silently swapped by compromised UI layers.

  • Revocation without fallout. Because there are no long-term secrets at rest, revoking a device or agent is as simple as revoking its private map. No mass password resets; no cascading breakage.

  • Public-interest carve-outs. Elections support, child-safety use, and crisis response should be offered at cost or free, with strict auditing and consent, so the tech lifts the baseline for everyone, not just those who can pay.

Equally important are governance mechanisms that prevent function creep and ensure accountability. Independent auditing bodies should verify that implementations actually follow data minimization principles. Open-source reference implementations should enable public scrutiny of security claims. Clear legal frameworks should define when and how these systems can be used, with meaningful penalties for misuse. Most crucially, users should retain ultimate control over their participation, with genuine alternatives available for those who choose different approaches to digital identity.
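
The "transparent policy binding" guardrail lends itself to a short sketch. The construction below is an assumption for illustration, not the production protocol: the human-readable label the user actually saw is hashed into the proof, so a transcript accepted for one action can never authorize another.

```python
import hmac, hashlib, os

secret = os.urandom(32)
nonce = os.urandom(16)

def bound_proof(secret: bytes, nonce: bytes, policy_label: str) -> bytes:
    # The label the user saw is part of what is proven.
    return hmac.new(secret, nonce + policy_label.encode(), hashlib.sha256).digest()

proof = bound_proof(secret, nonce, "release payment #123")
# The verifier recomputes with the policy it is about to execute:
assert hmac.compare_digest(proof, bound_proof(secret, nonce, "release payment #123"))
# A compromised UI that swaps the action cannot reuse the proof:
assert not hmac.compare_digest(proof, bound_proof(secret, nonce, "release payment #999"))
```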

What changes for different audiences

For people. Short, repeatable micro-proofs replace typing codes or memorizing complex passwords. You get fewer scary prompts, less "approval fatigue," and protection that scales with the stakes of the action you're taking. The daily experience becomes more like using your existing devices—natural, quick gestures that feel intuitive rather than security theater. Password-related support calls disappear. Account recovery becomes straightforward without compromising security.

For companies and platforms. Centralized credential stores, magnet targets for attackers, shrink. Incident response improves because there are fewer durable artifacts to rotate and fewer replayable tokens floating around. Customer support costs drop as password reset requests become obsolete. Compliance becomes easier because sensitive authentication data simply doesn't exist to be breached or regulated. Liability decreases because user credentials can't be stolen from company databases that no longer need to store them.

For governments and public services. You can push trust to the edge: the person (or the attested device/agent) proves presence in the moment, with evidence that audits well and resists replay. That raises public trust without demanding intrusive data collection. Digital government services become more accessible because citizens aren't excluded by complex password requirements or expensive authentication devices. Election security strengthens while privacy protections increase. Crisis response improves because identity verification works even when normal infrastructure is compromised.

For journalists and media platforms. Signed work product becomes normal and verifiable, enabling better ranking and distribution choices without deputizing platforms as censors of content. The emphasis is on provenance, not policing. Newsroom security increases because credentials can't be stolen to publish false stories. Reader trust grows because content authenticity becomes verifiable. Independent journalists gain access to the same credibility tools as major news organizations.

For AI ecosystems. Helpful agents become safer operators when every sensitive act requires a live, non-replayable pass anchored to their registered code and scope; no free-floating keys are left for a rogue process to siphon and reuse.

Frequently asked "but what about…?" questions

Q1: Won’t attackers just record the whole session and play it back? No. The acceptance depends on this session’s time and randomness. A perfect video doesn’t help because tomorrow’s projection is different and the masked answers won’t line up. (That’s the point of binding every pass to “now.”)

Q2: What if someone controls the screen and clicks for me? If they don’t hold your private map (or the device’s sealed one), they see the same projection you do, but without the advantage. They can guess, but the odds drop off as $(1/6)^h$ (“one over six to the h”); at $h = 10$ rounds, that is roughly one chance in sixty million.

Q3: I heard biometrics are easy. Why not just use my face? Biometrics are convenient as UI, not as secrets. Templates leak, faces and voices are spoofed. The model here lets you use any convenient carrier (camera, audio, haptics) without turning the trait itself into a token someone can steal.

Q4: What happens if a device is compromised? If the sealed map (the device’s “private geometry”) is protected, compromise of the renderer becomes a denial-of-service problem, not an impersonation vector. If the sealed core itself is suspected, revoke it: future derivations stop. There is no “master password” to change across dozens of services.

Q5: Isn’t this too complex for most users? The ceremony takes seconds and is repeatable. In user tests, the target “pops out” quickly for the legitimate person, while attackers who lack the map don’t get the same perceptual advantage. The entire flow is designed to be cognitively light and accessible by multiple modalities.

A practical rollout plan (from talk to action)

  1. Start at the edges that hurt most. Gate wire approvals, password resets, and admin role changes behind micro-proofs. Replace SMS/OTP with "prove-and-derive." These high-risk, low-frequency actions provide ideal testing grounds because users expect additional security steps, the financial impact of improvement is measurable, and failure modes are contained. Begin with enterprise customers who can provide structured feedback and have dedicated support resources to manage the transition.

  2. Make it a platform primitive. Ship the capability in OS identity APIs so app teams consume it like they consume biometrics today, without custom crypto. This requires coordination with major platform vendors—Apple, Google, Microsoft—to integrate the primitive into their identity frameworks. Provide comprehensive SDKs, clear documentation, and backward compatibility bridges so developers can adopt incrementally. The goal is making strong authentication as easy to implement as password authentication, removing technical barriers that currently favor insecure approaches.

  3. Offer a public-good track. Elections support, child-safety flows, and critical public services get a subsidized or free tier with strict policy and audit. Establish partnerships with electoral commissions, child protection agencies, and public health organizations to demonstrate the technology's value in high-stakes scenarios. These implementations serve as credibility anchors, showing that the approach works when it matters most while building public trust in the underlying technology.

  4. Bind proofs to policy. Make each privileged action carry a short, human-readable label that's hashed into the acceptance. That way, UI tricks can't flip meanings behind the scenes. Implement standardized policy description formats that survive translation across languages and platforms. This prevents attackers from exploiting user interface manipulation to trick people into authorizing unintended actions—a crucial protection as social engineering becomes more sophisticated.

  5. Treat adoption as change management, not a toggle. Pilot, measure fraud reduction and support burden, expand by cohort, and publish metrics to earn trust. Successful deployment requires careful attention to user experience, staff training, and system integration. Plan for 12-18 month rollout cycles with multiple feedback loops, user testing, and gradual expansion across user populations. Publish regular transparency reports showing security improvements and user satisfaction metrics to build broader ecosystem confidence.

Why this approach aligns incentives

  • Users get fewer hoops and better safety. The elimination of password management, the reduction in approval fatigue, and the intuitive nature of capability demonstration create immediate quality-of-life improvements. Users no longer lose access to accounts because of forgotten passwords or lost authentication devices. The approach scales naturally with risk—low-stakes actions require minimal interaction while high-stakes actions get proportional protection. Most importantly, users gain genuine security rather than security theater, reducing the background anxiety that comes from knowing their credentials could be compromised at any time.

  • Providers move away from high-value credential honey-pots. The business case is compelling: reduced liability from data breaches, lower customer support costs, improved user experience leading to higher engagement and lower churn. Companies no longer need to invest in elaborate credential protection infrastructure because there are no persistent credentials to protect. Incident response becomes faster and less costly because there are fewer systems to compromise and fewer credentials to rotate when breaches occur.

  • Regulators and auditors get precise, replay-resistant evidence of consent and presence. Compliance becomes more straightforward because authentication events are cryptographically verifiable and bound to specific actions and timeframes. Audit trails are cleaner and more meaningful because they capture actual authorization decisions rather than just credential presentation. Privacy regulations become easier to implement because personal data collection for authentication purposes drops dramatically.

  • Attackers lose their easiest path (credential theft and replay) and must attempt much noisier, rarer attacks. The economic model of cybercrime depends on scale—techniques that work against millions of users simultaneously are profitable, while approaches that require individual targeting are not. By eliminating reusable credentials, the approach forces attackers toward more expensive, detectable, and legally risky attack methods. This shifts the entire risk-reward equation in favor of defenders.

This is what "secure by default" looks like when translated into day-to-day operations: the easiest path for the honest user becomes the hardest path for the attacker. When legitimate use becomes frictionless and illegitimate use becomes prohibitively difficult, security serves human flourishing rather than constraining it.

The moonshot mindset: big problem, doable steps

Yes, the challenge is huge. It spans code, design, policy, and habits. But the path forward is incremental and measurable. The moonshot metaphor isn't about attempting everything at once—it's about maintaining an ambitious vision while taking concrete steps that compound over time. Each small victory creates momentum for larger transformations, and each successful implementation provides evidence that motivates the next phase of adoption.

  • Replace a handful of brittle OTP steps with micro-proofs. Start with the most painful authentication points where users and organizations feel immediate relief. Financial institutions gating wire transfers, healthcare systems protecting patient records, cloud platforms securing administrative access—these environments have both high stakes and sophisticated users who understand the value proposition. Early wins in these sectors create case studies and reference implementations that lower adoption barriers for everyone else.

  • Prove the fraud drop and the support savings. Measurement is crucial because security often involves proving a negative—attacks that didn't happen, breaches that were prevented, costs that were avoided. Establish baseline metrics before implementation and track key indicators rigorously: authentication failure rates, support ticket volume, account takeover incidents, user satisfaction scores. Publish these results transparently to build confidence in the approach and demonstrate return on investment.

  • Push the primitive into OSes and SDKs so developers can "just use it." The transition from specialty security technology to ubiquitous infrastructure follows a predictable pattern. Complex cryptographic operations become simple API calls. Cutting-edge features become standard capabilities. Platform integration is what transforms experimental technology into everyday tools that millions of developers can use without becoming security experts themselves.

  • Backstop public-interest use cases where trust matters most. Election security, child protection, crisis response, journalism verification—these applications justify significant investment because the social value extends far beyond direct economic returns. Success in these domains creates public trust and political support that enables broader adoption.

Do that, and "privacy as a lived reality" becomes more than a settings screen. It becomes the everyday experience of using technology without the background stress that something you typed yesterday will be used against you tomorrow. That transformation, from anxiety to confidence in our digital tools, represents the true moonshot: technology that enhances human agency rather than undermining it.

One page you can share internally

The problem: Trust online is brittle because we ship and store credentials that attackers can intercept or replay. Fraud and fatigue result. Current authentication approaches create systemic vulnerabilities: password databases become attractive targets, SMS codes can be intercepted, biometric templates can be stolen. The $16 billion annual fraud loss represents just the visible damage—the hidden costs include customer abandonment, support overhead, and erosion of digital trust that constrains innovation and adoption.

The shift: Don't export secrets. Ask people (or their attested agents) to prove a live, private ability on a short, time-keyed challenge. If they pass, derive an ephemeral key for the session. If they fail, nothing leaks. This eliminates the fundamental vulnerability of reusable credentials while creating user experiences that feel more natural than current password-based flows. The mathematical foundation ensures that legitimate users find the task obvious while making random guessing astronomically unlikely.

The wins: Fewer credential thefts, fewer replays, better consent evidence, more accessible flows (visual/audio/haptic), and simpler incident response. Quantifiable benefits include dramatic reductions in account takeover incidents, elimination of password-related support tickets, improved conversion rates on sensitive transactions, and reduced liability from credential-related breaches. User experience improves because people no longer manage dozens of passwords or navigate complex recovery processes. Compliance becomes easier because authentication events provide cryptographic proof of user presence and intent.

Where to start: Replace OTP on high-risk actions; prove the fraud-cut; integrate at the platform layer; and subsidize public-interest uses (elections, child safety, journalism provenance). Begin with scenarios where users already expect additional security steps, measure the impact rigorously, and expand based on demonstrated results. Partner with platform vendors to make the technology as easy to implement as existing authentication methods. Focus early implementations on use cases with clear social value to build public trust and regulatory support.

ROI timeline: Pilot programs typically show measurable fraud reduction within 60 days. Platform integration enables broader deployment within 6-12 months. Network effects accelerate adoption as more services support the approach, creating a virtuous cycle where security improves while friction decreases.

Closing

Security is not a product you buy once; it is a posture that pays off every day. The ENI6MA approach reframes identity around what cannot be faked cheaply: the ability to pass a brief, time-keyed proof that reveals nothing reusable. That's how you rebuild trust at human scale, by making the right thing easy for people and expensive for attackers.

This transformation represents more than a technical upgrade—it embodies a different philosophy about the relationship between technology and human dignity. Current systems treat every user as a potential threat who must be continuously verified, monitored, and controlled. The resulting architecture of suspicion corrodes social trust and constrains human potential. When people retreat from digital engagement because they cannot trust the systems that mediate it, everyone loses: individuals miss opportunities for connection and growth, organizations cannot reach their audiences effectively, and society fails to realize the collaborative possibilities that digital technology should enable.

The alternative vision emerging from this work suggests that technology can serve human flourishing rather than undermining it. When authentication systems preserve privacy while providing security, when identity verification enhances agency rather than constraining it, when digital trust becomes robust rather than brittle—in these conditions, technology becomes genuinely empowering rather than subtly oppressive.

The moonshot framing is essential because incremental improvements to fundamentally broken systems cannot address the scale and urgency of current challenges. We need breakthrough innovations that change the game entirely, making the most profitable attack vectors unprofitable, the most common vulnerabilities obsolete, and the most complex security tasks intuitive for ordinary users.

As we pursue this as a moonshot, ambitious, collaborative, and focused on public good as well as private value, we can make the internet feel safe again without demanding that ordinary people become security experts. This vision—of digital systems that serve human dignity and democratic values—is not utopian fantasy but achievable reality. The technical foundations exist, the economic incentives align, and the social need is urgent.

What remains is the collective will to choose better systems over familiar ones, to prioritize long-term flourishing over short-term convenience, and to build technology that reflects our deepest values rather than our worst fears. That's a future worth shipping.



The Global Stakes of Cybersecurity, and Why Solving Password/Credential Theft Is the Fastest Way to Bend the Risk Curve

Audience: journalists and editors, undergraduate students, investors, and technically adept readers. This essay uses plain language, with sidebars that tailor the takeaways for each group. Where helpful, I show tiny, transparent bits of math with readable LaTeX and add a one-line text-to-speech (TTS) gloss so it works for voice readers too.


The Big Picture: Cybersecurity Is Now a Public-Interest Problem

Cybersecurity has graduated from an “IT issue” to a societal risk. Power grids, hospitals, elections, small businesses, supply chains, and every smartphone in a teenager’s hand depend on software, networks, and cloud identity systems. When those identity systems fail, the harm is no longer abstract:

  • Patients are rerouted from trauma centers.

  • Funds vanish from small-business accounts.

  • School districts and municipalities go offline.

  • Disinformation piggybacks on hacked media accounts.

  • Research data and AI models are stolen or poisoned.

The numbers are not subtle. The FBI’s Internet Crime Complaint Center (IC3) logged $16 billion in reported losses for 2024, up 33% year over year, across 859,532 complaints. That is just what was reported; the real economic drag is larger. (Federal Bureau of Investigation) The World Economic Forum places cyber risk high in global risk rankings, with cyber espionage and warfare in the near-term top tier as technology and geopolitics intertwine. (World Economic Forum)

From a macro lens, this is an investment story as much as it is a security story: avoidable cyber loss is a tax on growth. It suppresses digital adoption, misallocates capital toward incident response, and erodes trust in online services. For investors, that’s margin compression; for the press, it’s a systemic trust issue; for students and engineers, it’s a design and implementation challenge we can actually improve.


Root Cause: Credentials Are the Soft Underbelly

If you remember one sentence from this essay, remember this: the cheapest path to most modern breaches still runs through stolen or misused credentials. In Verizon’s 2025 Data Breach Investigations Report (DBIR), stolen credentials dominate Basic Web Application Attacks, showing up in about 88% of cases within that pattern. (Verizon) That is the front door of the internet.

Corroborating the same pressure point, Microsoft observes that more than 99.9% of compromised accounts lacked MFA, almost all of these accounts were protected by passwords alone, making them soft targets for password reuse, phishing, and password-spray automation. (Microsoft Learn) And when the FBI tallies those $16B in annual losses, a substantial share traces back to credential-centric scams like Business Email Compromise (BEC), which weaponizes account takeover against the trust you place in your colleagues and vendors. (Federal Bureau of Investigation)

Why credentials fail in practice

  • They are reused across sites and services.

  • They are phished with near-industrial efficiency.

  • They are bought in bulk via infostealer logs.

  • They are replayed or relayed in real time by adversaries-in-the-middle.

  • They often live too long and propagate across tools, scripts, and integrations.

No amount of scolding users about “stronger passwords” can reverse these structural incentives. The problem is not the human, it’s the artifact. A password is a reusable secret; reusable secrets are trivially copied; copied secrets are monetized at scale.


The New Twist: AI Supercharges Both Sides

AI has put power tools in everyone’s hands, defenders and attackers alike. IBM’s 2025 Cost of a Data Breach analysis flags the rise of AI-related security incidents, including “shadow AI” (unsanctioned use of AI tools) that adds measurable cost to breaches and lowers detection quality. (IBM) Media coverage echoes the pattern: AI-generated phishing and deepfakes are accelerating social engineering, compressing the time it takes criminals to craft convincing lures from hours to minutes. (IT Pro) WEF’s risk outlooks likewise tie cyber instability to the rapid adoption of AI amid geopolitical friction. (World Economic Forum Reports)

Good news: AI also helps defenders find anomalies faster and filter noise. But the structural problem remains: as long as reusable credentials are the coin of the realm, AI will keep minting better forgeries, faster.


The Money: How Big Is the Damage, Really?

Let’s anchor to two complementary sources:

  • FBI IC3 (2024): $16B in reported internet crime losses, +33% YoY. (Federal Bureau of Investigation)

  • IBM Cost of a Data Breach (2025): global average cost $4.44M per breach; U.S. average $10.22M, the latter an all-time high, driven by fines, response, and detection costs. (IBM)

A simple way to reason about the economics is an expected-loss model. Suppose:

  • pb = annual probability of a breach,

  • Cb = average cost if breached (in dollars),

  • Cm = annual cost of a mitigation (e.g., phishing-resistant MFA rollout),

  • p′b = breach probability after mitigation.

The expected annual loss without vs. with mitigation is:

  • Before: $\mathbb{E}[L_0] = p_b \cdot C_b$. “Expected loss equals p sub b times C sub b.”

  • After: $\mathbb{E}[L_1] = p_b' \cdot C_b + C_m$. “Expected loss equals p prime sub b times C sub b plus C sub m.”

The return on investment (ROI) of the mitigation is:

$$\mathrm{ROI} \;=\; \frac{\mathbb{E}[L_0] - \mathbb{E}[L_1]}{C_m} \;=\; \frac{(p_b - p_b')\,C_b - C_m}{C_m}.$$

“ROI equals the reduction in expected loss divided by the cost of mitigation.”
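
A worked instance of this model, as a minimal sketch in code: the breach cost is the IBM global average cited above, while the probabilities and mitigation cost are illustrative assumptions, not measured figures.

```python
# Breach cost: IBM 2025 global average (cited above). The probabilities and
# mitigation cost below are illustrative assumptions, not measured figures.
C_b = 4_440_000     # average cost if breached (USD)
p_b = 0.10          # assumed annual breach probability, before mitigation
p_b_after = 0.04    # assumed probability after phishing-resistant MFA
C_m = 150_000       # assumed first-year mitigation cost

E_L0 = p_b * C_b                  # expected annual loss, before
E_L1 = p_b_after * C_b + C_m      # expected annual loss, after
roi = (E_L0 - E_L1) / C_m

print(f"before: ${E_L0:,.0f}  after: ${E_L1:,.0f}  net ROI: {roi:.0%}")
# before: $444,000  after: $327,600  net ROI: 78%; the return scales up
# quickly with a larger C_b or a bigger drop in breach probability.
```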

Because credential theft is a dominant initial access vector, even modest reductions in pb from eliminating password-only flows tend to pay for themselves when Cb runs in the multimillion-dollar range. This is why insurers, regulators, and boards are now pressing specifically for phishing-resistant authentication, solutions that are not just “multi-factor,” but resistant to real-time relay and replay. CISA’s published materials are blunt: only FIDO/WebAuthn (and PKI smartcards) meet phishing-resistant criteria; authenticator apps and SMS codes remain phishable. (CISA)


Credential Theft Mechanics, In Plain Language

Phishing succeeds because it pushes you into logging in on the attacker’s page (or through their proxy), then reuses what you typed. Credential stuffing succeeds because people reuse passwords, and attackers automate logins using breach corpuses. Password sprays succeed because many enterprise accounts still follow guessable patterns. Infostealer malware exfiltrates saved passwords and session cookies, bypassing the login ceremony entirely.

For web apps, the DBIR’s “Basic Web Application Attacks” category, where you’d expect logins to be the choke point, confirms the obvious: credentials are the lever (again, ~88% involve stolen creds within that pattern). (Verizon)

Tiny risk calculation to build intuition: if a login flow runs $h$ short challenge rounds and a random guesser’s chance per round is $1/k$, the pass probability from pure guessing is:

$$P_{\text{guess}} \;=\; \left(\frac{1}{k}\right)^{h}.$$

“P guess equals one over k to the h.”
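
A quick numeric check of this bound, with $k = 6$ to match the $(1/6)^h$ odds quoted earlier in the document:

```python
def p_guess(k: int, h: int) -> float:
    """Probability that pure guessing passes all h rounds."""
    return (1 / k) ** h

for h in (1, 5, 10):
    print(f"h = {h:2d}: P_guess = {p_guess(6, h):.2e}")
# h =  1: P_guess = 1.67e-01
# h =  5: P_guess = 1.29e-04
# h = 10: P_guess = 1.65e-08   (about 1 in 60 million)
```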

This is the logic behind phishing-resistant factors: they don’t rely on replay-able strings (passwords, codes); they rely on possession-bound, origin-bound cryptography so an attacker cannot “relay” your response to a different site or reuse a recorded interaction tomorrow.


The Fix That Works: Phishing-Resistant MFA and Passwordless Sign-In

Standards bodies and agencies increasingly speak with one voice:

  • NIST SP 800-63-4 (2025 final) sets expectations for digital identity, with explicit attention to phishing resistance at higher assurance levels. (NIST Computer Security Resource Center)

  • CISA repeatedly urges phishing-resistant MFA (FIDO2/WebAuthn or PKI), and explicitly warns that SMS and most app-based OTPs are still phishable. (CISA)

  • Microsoft reports that over 99.9% of compromised accounts had no MFA, a stark operational metric that continues to hold up at cloud scale. (Microsoft Learn)

On the adoption front, the trajectory is finally encouraging:

  • The FIDO Alliance reports passkey support at ~48% of the world’s top 100 websites, with rising consumer awareness and enablement. (FIDO Alliance)

  • Google made passkeys the default for personal accounts and continues to push the ecosystem; new features are even exploring automatic password-to-passkey upgrades where sites allow it. (blog.google)

  • Surveys and vendor telemetry point to steep increases in passkey creation and success rates, with smoother sign-ins than passwords and lower help-desk load. (Examples include Dashlane’s 2024 surge and Bitwarden’s figures.) (The Verge)

This convergence, clear guidance + real adoption, is unusual in security and should be seized. It means you can pick a track (platform passkeys, roaming security keys, enterprise FIDO sign-in) and be aligned with best practice, not just a vendor pitch.


Why This Matters to Four Different Audiences

For the Press (and Your Readers)

  • Story framing: Credential theft is the root cause, not just a detail. When you cover a hospital outage, a crypto theft, or a city ransomware event, trace how identity failed: was it passwords, OTP phishing, token theft, or session hijack? Connecting the dots educates your audience and holds decision-makers accountable for controllable risk.

  • Context: Cite trusted sources: FBI IC3 loss figures; Verizon DBIR patterns; IBM’s breach-cost trend lines; CISA/NIST guidance on phishing resistance. (Federal Bureau of Investigation)

  • Public-interest lens: WEF’s risk framing helps readers see this as a governance and resilience issue, not only a tech story. (World Economic Forum)

For Undergraduate Students

  • Mental model: Think of a password as a copyable ticket. Anyone holding a copy gets in. A passkey (FIDO/WebAuthn) is more like a lock-and-key pair that only turns when you are truly at the right door (the origin) and holding the private key on your device. Replay is useless because the door challenges your key every time.

  • Career skill: Learn the web specs (WebAuthn), read NIST SP 800-63-4 sections on authenticators, and practice integrating passkeys in a demo app. It’s an employable skill with immediate impact. (NIST Computer Security Resource Center)

For Investors

  • Thesis: Enterprises that retire passwords in high-risk flows cut breach probability and support burden, easing cyber-insurance pressure and improving margins. The TAM for passwordless tooling spans consumer platforms, workforce identity, and machine/agent auth.

  • Signals: Look for vendors and platforms aligning with NIST/CISA terminology (phishing-resistant MFA), actual WebAuthn implementations, measurable help-desk ticket reductions, and selection-success metrics (login success up; fraud down). (CISA)

For Technically Adept Readers and Engineers

  • Design for origin binding. Use WebAuthn so the cryptographic exchange is bound to the relying party ID; that binding is what kills phishing/relay at the protocol layer (a sketch follows this list). (CISA)

  • Close weak fallbacks. After enrolling FIDO, disable SMS and legacy password fallbacks that attackers target. (CISA says this explicitly.) (CISA)

  • Scope and rotate properly. Use per-site keys (built into WebAuthn) and short-lived tokens. Adopt risk-based step-up with phishing-resistant challenges for wires, admin changes, and recovery.

  • Measure the curve. Track account-takeover attempts, credential-stuffing blocks, and help-desk call volume; model the post-MFA breach probability drop in your expected-loss formula from §4 and compute ongoing ROI.
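
As promised above, a hedged sketch of the origin check at the heart of WebAuthn’s phishing resistance. A real relying party also verifies the signature, the challenge, and the RP ID hash; this sketch isolates the origin-binding step, and the names in it are assumptions for illustration.

```python
import json

EXPECTED_ORIGIN = "https://bank.example"   # the relying party's real origin (assumed)

def origin_ok(client_data_json: bytes) -> bool:
    """Reject any assertion whose clientDataJSON names a different origin."""
    data = json.loads(client_data_json)
    return data.get("type") == "webauthn.get" and data.get("origin") == EXPECTED_ORIGIN

legit = json.dumps({"type": "webauthn.get", "challenge": "abc123",
                    "origin": "https://bank.example"}).encode()
phish = json.dumps({"type": "webauthn.get", "challenge": "abc123",
                    "origin": "https://bank-login.example"}).encode()

assert origin_ok(legit)
assert not origin_ok(phish)   # a response relayed from a look-alike site fails
```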


A Short Tour of Phishing-Resistant MFA (What It Is, What It Isn't)

What it is: A login where the server challenges a private key that never leaves your device (phone, laptop TEE/TPM, hardware key). The signature is scoped to the site/app (origin), so an attacker’s look-alike site can’t mint a valid proof for the real one. This is FIDO2/WebAuthn, or PKI smartcards in government and regulated sectors. (CISA)

What it isn’t: SMS codes, email links, TOTP codes in authenticator apps, or “push to approve” buttons. Those are multi-factor, but generally not phishing-resistant; criminals can relay them in real time through adversary-in-the-middle kits. CISA and NIST draw this line plainly. (CISA)

Why it scales now: Browser, OS, and cloud support is mature; passkeys make the UX feel like “Face ID/Touch ID to log in,” with cross-device sync under your cloud account (Apple, Google, Microsoft), or portable hardware keys for higher assurance. Adoption is trending up at major consumer and enterprise services. (blog.google)


Case Studies and Signals from the Field

  • Public sector modernization: U.S. agencies have published success stories migrating toward phishing-resistant authentication even in legacy-heavy environments, one reason federal policy now emphasizes it. (CISA)

  • Cloud platforms: Microsoft’s 99.9% metric is not a slogan; it’s a data-center-scale observation that no-MFA accounts dominate compromise statistics. (Microsoft Learn)

  • Consumer ecosystem: Passkeys are now default in large ecosystems (e.g., Google accounts), and mainstream coverage highlights broader availability and conversion tooling. (blog.google)

Operational tip (engineering): Roll out passkeys in phases. Start with employees and admins, then high-risk customer segments (e.g., those performing payouts). Monitor: login success rate ↑, help-desk tickets ↓, ATO attempts blocked ↑.


Why the Credential Problem Is the Force Multiplier

If you could wave a wand and cut credential abuse in half, you would bend multiple breach curves at once:

  • Phishing would lose its easiest payoff.

  • Credential stuffing would fail by default.

  • Ransomware operators would have a harder time getting initial footholds via RDP/VPN or SaaS admin panels.

  • BEC would be starved of compromised accounts that lend it legitimacy.

This is not hypothetical. Verizon DBIR patterns, Microsoft’s compromise telemetry, and FBI loss data all point to identity as the first domino in many attacks. (Verizon) Knock over a different domino, say, “patch every vulnerability”, and you still leave a password-only login somewhere to be phished or replayed. Knock over credentials, however, and many of the most profitable criminal workflows stall.


Governance, Regulation, and Boardroom Imperatives

Boards and regulators are increasingly explicit:

  • NIST SP 800-63-4 (2025) codifies modern expectations for authenticator assurance, including phishing resistance and lifecycle management. (NIST Computer Security Resource Center)

  • CISA documentation and advisories push organizations to turn off weak fallbacks and prioritize FIDO/WebAuthn for targeted users and privileged access. (CISA)

  • Investors and asset owners (e.g., ESG discussions at Davos) now treat cyber resilience as financial risk management, noting AI-amplified threats and sector exposure. (Reuters)

For executive teams, that means three concrete policies:

  1. Set phishing-resistant MFA as a control objective (not “MFA” generically).

  2. Track identity control posture in board dashboards: the % of accounts with passkeys/keys, % with SMS fallback disabled, time-to-revoke for lost devices/keys (a sketch of these KPIs follows this list).

  3. Demand vendor proof: ask your SaaS providers to demonstrate WebAuthn support and give you the toggles to disable legacy factors.
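
The dashboard items in policy 2 reduce to a handful of numbers any identity provider can feed you. A minimal sketch in TypeScript; the field names are hypothetical stand-ins for whatever your IdP actually exports:

    // Identity-posture KPIs for a board dashboard. Field names are hypothetical;
    // map them onto whatever your identity provider exports.
    interface AccountRecord {
      hasPasskey: boolean;                // FIDO/WebAuthn credential enrolled
      smsFallbackEnabled: boolean;
      hoursToRevokeLastIncident?: number; // time-to-revoke for lost devices/keys
    }

    function posture(accounts: AccountRecord[]) {
      const n = accounts.length;
      const revocations = accounts
        .map(a => a.hoursToRevokeLastIncident)
        .filter((h): h is number => h !== undefined);
      return {
        pctWithPasskeys: (100 * accounts.filter(a => a.hasPasskey).length) / n,
        pctSmsDisabled: (100 * accounts.filter(a => !a.smsFallbackEnabled).length) / n,
        meanHoursToRevoke: revocations.length
          ? revocations.reduce((sum, h) => sum + h, 0) / revocations.length
          : null,
      };
    }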


Plain-English Economics: Why This Pays for Itself

Revisit the expected-loss model from §4. Plug in realistic numbers:

  • Let Cb (breach cost) ≈ $4.44M at the global average, or $10.22M at the U.S. average. (IBM)

  • Let Cm (first-year migration + support) be a fraction of that, especially at SaaS scale.

  • Assume phishing-resistant MFA drops identity-driven breach probability pb by even a few tenths of a percent at scale.

Because Cb is large, small deltas in pb have outsized impact on expected loss. This is why cyber insurers increasingly grant discounts or coverage improvements for phishing-resistant deployments. (Even when policy language is vague, underwriters understand the telemetry: credential-only estates are loss-leaders.)
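
As a worked illustration, take the global average cost and a probability drop of just 0.3 percentage points (an assumption chosen for the arithmetic, not a measured figure):

\Delta EL = \Delta p_b \cdot C_b = 0.003 \times \$4.44\text{M} \approx \$13{,}300 \text{ per year}.

Multiplied across an enterprise's tenants, business units, or portfolio companies, the saved expected loss quickly dwarfs a one-time migration cost Cm.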


Subtle Pitfalls, and How to Avoid Them

  1. Leaving weak fallbacks on. If you add passkeys but keep SMS/OTP as default fallbacks, attackers will simply target the fallback. Follow CISA’s advice: disable SMS post-enrollment and require phishing-resistant factors for privileged actions (see the policy sketch after this list). (CISA)

  2. Account recovery gaps. Recovery flows often undermine your strong login. Treat recovery as a first-class risk: require phishing-resistant proofs, human-in-the-loop checks for high-value accounts, or out-of-band notarized steps for enterprise admins.

  3. Inconsistent coverage. Protecting the “front door” but leaving APIs, legacy VPNs, and service accounts with static passwords is common. Inventory services; use mTLS, workload identities, and keyless patterns where possible.

  4. Supply chain blind spots. Many recent incidents involve third-party vendor compromise leading to mass credential theft or token replay. Adopt continuous vendor security reviews and insist on origin-bound authenticators in the services you consume. Verizon’s 2025 page highlights rising third-party involvement, another reason to standardize on modern identity controls across your suppliers. (Verizon)
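
One way to make pitfalls 1 and 2 auditable is to express the factor policy as data rather than as scattered console toggles. A hedged sketch in TypeScript; the policy shape is invented for illustration and is not any vendor's schema:

    // Declarative factor policy, sketched. The shape is illustrative, not a
    // vendor schema; the point is that fallback and recovery rules become
    // reviewable configuration.
    type Factor = "passkey" | "hardware-key" | "totp" | "sms" | "password";

    type Surface = "login" | "privilegedActions" | "recovery";
    type FactorPolicy = Record<Surface, Factor[]>;

    const policy: FactorPolicy = {
      login: ["passkey", "hardware-key"],             // SMS/TOTP removed post-enrollment
      privilegedActions: ["passkey", "hardware-key"], // wires, admin role changes
      recovery: ["hardware-key"],                     // plus human-in-the-loop for admins
    };

    // A simple audit: flag any surface that still accepts a replayable factor.
    function weakSurfaces(p: FactorPolicy): Surface[] {
      const replayable: Factor[] = ["totp", "sms", "password"];
      return (Object.keys(p) as Surface[]).filter(s =>
        p[s].some(f => replayable.includes(f)),
      );
    }

    console.log(weakSurfaces(policy)); // [] once the estate is clean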


How AI Changes the Playbook (Without the Hype)

  • Attackers: Lures become hyper-personalized; deepfake audio/video increases authority mimicry; code-assistants speed exploit development.

  • Defenders: Anomaly detection improves; response playbooks auto-route; risk scoring gets sharper.

But note: AI does not change the math of replayable secrets. If a login flow accepts anything that can be copied and replayed later (a password, a code, an unbound token), then AI will help attackers obtain and replay it faster. If, instead, the login requires origin-bound cryptographic proof, there is nothing to steal that works elsewhere.


A Short Checklist You Can Use Tomorrow

For a newsroom or media platform

  • Require phishing-resistant MFA for CMS and social accounts; remove SMS fallbacks.

  • Track and periodically attest that newsroom accounts still have passkeys/keys enrolled; this cuts impersonation risk and improves audience trust in verified posts.

  • Keep a recovery runbook that isn’t just “email a helpdesk and reset via link.”

For a university or department

  • Default to WebAuthn in single-sign-on; protect admin panels first.

  • Offer inexpensive roaming keys to research teams handling sensitive data.

  • Train: credential-theft drills are as valuable as phishing drills.

For a portfolio company (or your own firm)

  • Inventory logins; set a sunset date for passwords on all privileged access.

  • Prioritize high-risk flows (wires, payroll, vendor bank-change, admin role changes).

  • Measure: ATO rate, help-desk tickets, and % users with SMS disabled after enrollment.


Teaching Moment: Turning the Math Into Intuition

Return to the guessing probability:

P_{\text{guess}} = \left(\frac{1}{k}\right)^{h} \quad \text{(e.g., choose 1 correct region out of $k$ across $h$ rounds)}.

In words: the chance of guessing is one over $k$, raised to the power $h$.

  • Increase $k$ (more possibilities per round) or $h$ (more rounds), and a random attacker’s success probability collapses; every extra round divides it by another factor of $k$. In plain terms: a little more ceremony makes guessing astronomically unlikely. (Worked numbers follow this list.)

  • Phishing-resistant MFA uses different math (public-key cryptography), but the same intuition applies: the attacker must be you at your device at the real site in that moment. Without the private key and origin binding, the request is worthless.
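
To put illustrative numbers on the guessing bound ($k$ and $h$ here are arbitrary values, chosen only to make the arithmetic visible):

P_{\text{guess}} = \left(\tfrac{1}{10}\right)^{4} = 10^{-4}, \qquad \left(\tfrac{1}{10}\right)^{8} = 10^{-8}.

Doubling the rounds from four to eight buys a ten-thousand-fold drop in a random attacker’s odds, while the honest user’s effort grows only linearly.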

This is why “password + code” is not enough. It still lets attackers proxy the whole ceremony through their own infrastructure in real time. Phishing-resistant methods deny that proxy outright.


Frequently Asked Questions, Answered Briefly

Q: We already have MFA. Isn’t that enough? A: It depends. Phishing-resistant MFA (FIDO/WebAuthn or PKI) is the target. SMS and TOTP can be phished/relayed. CISA and NIST explicitly make this distinction. (CISA)

Q: Is this too hard for users? A: Today, passkeys feel like unlocking your phone: Face ID or Touch ID and you’re in. Major ecosystems have made them the default, and sign-in success rates are higher than with passwords. (blog.google)

Q: What about costs? A: The ROI is compelling when breach costs are in the millions (global average $4.44M, U.S. $10.22M), and when insurers/regulators favor phishing-resistant controls. (IBM)

Q: Can AI break passkeys? A: AI can help craft lures, but origin-bound cryptography isn’t fooled by style. Without the private key on your device and a correct origin, the login fails.


The Road Ahead: From Passwords to Proven Presence

A useful slogan for the next five years is “reduce replayable secrets.” That means:

  1. Eliminate password-only flows for anything valuable.

  2. Default to passkeys for employees and consumers.

  3. Harden recovery with phishing-resistant steps.

  4. Turn off weak fallbacks after enrollment (especially SMS).

  5. Extend strong identity to admin tools, APIs, and third-party vendors.

Do these, and you reduce the surface area criminals rely on. You also simplify your life: fewer help-desk tickets, fewer post-incident password resets, clearer audit trails.


Executive Summary (for Editors, Deans, CFOs, and Boards)

  • Impact: Cybercrime losses are large and growing; FBI logged $16B for 2024 alone (+33% YoY). (Federal Bureau of Investigation)

  • Root cause: Credential misuse is the fastest path into organizations; within web-app attacks, stolen creds show up ~88% of the time. (Verizon)

  • Cost: Average breach cost is $4.44M globally, $10.22M in the U.S., per 2025 IBM. (IBM)

  • Solution: Move from replayable secrets (passwords, SMS/TOTP) to phishing-resistant MFA (FIDO/WebAuthn, PKI). NIST SP 800-63-4 and CISA are aligned here. (NIST Computer Security Resource Center)

  • Momentum: Passkeys are now default in major ecosystems; ~48% of top sites support them; user success and adoption are rising. (blog.google)

  • AI context: AI accelerates phishing and deepfakes, but cannot defeat origin-bound cryptographic login; reducing credential theft blunts AI-driven social engineering at the root. (IT Pro)


Closing Thought: Trust Is a Design Choice

Security isn’t about making people memorize better strings. It’s about changing the shape of the problem so the easiest path for honest users (Face ID, Touch ID, a hardware key tap) is the hardest path for attackers. We have the standards, the tooling, and the evidence. What we need now is follow-through: retire passwords where it counts, remove weak fallbacks, and make phishing-resistant identity the default for people, devices, and agents.

When we do, the payoff won’t just be fewer headlines about breaches. It will be a quieter, more trustworthy internet, one where individuals, institutions, and innovators spend their time building, learning, and serving, not cleaning up the blast radius of stolen credentials.


Further reading and sources used above: Verizon DBIR 2025 site; IBM 2025 Cost of a Data Breach; FBI IC3 2024; WEF Global Risks reports; CISA guidance on phishing-resistant MFA and mobile best practices; Microsoft compromise telemetry; and FIDO Alliance adoption data. (Verizon)
