Proving Personhood Without Handing Over the Keys
22 February 2026 · Arka Rai Choudhuri and Sanjam Garg
“On the Internet, nobody knows you’re a dog.” The famous 1993 New Yorker cartoon was a joke, but it pointed at a real problem: how do you know you’re interacting with a real human online? Or with a human of the right age, with the right credentials? Today that question is sharper than ever.
Have you noticed more calls and messages from unknown senders—someone “just trying to help”? Often it’s a scammer or an ad-bot that sounds just like a human. We’re at the beginning of a reality where what you see or hear online may not be real. Are we prepared?
Early systems like CAPTCHA [1] were built on a simple idea: give the user a task that is easy for humans but hard for machines. For a while they worked. Today they are essentially broken: LLMs and other ML tools can defeat CAPTCHAs at scale. For example, Akirabot [2], an OpenAI-based tool, was used to spam 420,000 sites by bypassing CAPTCHAs. We can no longer rely on humans being “smarter” than bots.
And personhood isn’t only “am I human?” Online we also need relationship-backed trust: reputation, membership in a community, context-specific endorsements, age verification, attestations of who vouches for you. All of it should preserve privacy, yet many current approaches are ad hoc and privacy-hostile. Mobile driver’s licenses (mDLs) can let the verifier query the DMV on every check, so the DMV can see and record every verification. Discord required government IDs to access “adult” content, and then had over 70,000 of those IDs stolen in a breach. Worldcoin’s global biometric system has been called a “privacy nightmare” and has faced shutdowns or strict limits in Spain, Bavaria, Hong Kong, and Kenya. So: how do we prove personhood without handing over the keys?
In this post we explain our recent work [3]: we show how to construct proofs of personhood so that humans can prove their reputation and credentials online in a privacy-preserving way. The design is decentralized, with no reliance on centralized parties for setup, and we use zero-knowledge (ZK) proofs so people can prove what they need without leaking the rest. To our knowledge, no prior work constructs efficient ZK proofs for this setting. The work contributes to the vision of the First Person Network [4]: a global infrastructure for real people and real trust, with no intermediaries. Below we walk through a typical protocol with a running example, highlighting the key ideas and security requirements at each stage.
Stage I: Getting a Personhood Credential
Someone has to vouch that you’re a real person before you can vouch for others. We call these parties issuers: organizations everyone can look up and trust. They might be your local DMV, a passport office, or even a loyalty program. In practice, many personhood systems use government-issued IDs (passports, mobile driver’s licenses) as the “one person, one credential” anchor, and our protocol supports that.
Alex is a ride-share driver. Alex goes to a government authority (say, the DMV or a passport office) and gets a personhood credential (PHC) that attests to some facts about him—for example:
att = {
Name = Alex
Occupation = Driver
License type = Commercial
Date of birth = 1990-05-15
}
The authority has a public key that anyone can check. Alex provides his own public key and these attributes, proves he owns the provided key, and goes through the issuance flow. At the end, he holds a credential that he can verify himself using the authority’s public key. No one can fake that credential—that’s our first guarantee:
PHC Unforgeability. Credentials cannot be forged.
But wait—privacy. If Alex used the same public key with every issuer, then different issuers could link his requests together and learn more about him than he intended. We don’t want that. So we require:
PHC Receiver Unlinkability. Different issuers cannot link credentials issued to the same person.
The fix: Alex derives a different-looking public key for each issuer from his secret key and that issuer’s public key—so each key looks unrelated to the others. To each issuer he only shows this derived key; they can’t tell it’s the same person going to another issuer. He proves in zero knowledge that the derived key was computed correctly, without revealing his master secret.
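The per-issuer derivation can be sketched in a few lines. This is a toy stand-in, assuming a keyed-hash derivation and made-up key material: the paper's construction derives group elements and proves correct derivation in zero knowledge, which a hash cannot do. The sketch only illustrates the functional shape, that one master secret deterministically yields unlinkable-looking keys per issuer.

```python
import hashlib
import hmac

def derive_issuer_key(master_secret: bytes, issuer_pk: bytes) -> bytes:
    # Toy stand-in: derive a per-issuer public key by keyed hashing.
    # The real construction derives a group element and attaches a
    # zero-knowledge proof that the derivation is correct.
    return hmac.new(master_secret, b"issuer-key|" + issuer_pk,
                    hashlib.sha256).digest()

secret = b"alex-master-secret"          # hypothetical key material
k_dmv = derive_issuer_key(secret, b"dmv-public-key")
k_passport = derive_issuer_key(secret, b"passport-office-public-key")
assert k_dmv != k_passport              # issuers see unrelated-looking keys
assert k_dmv == derive_issuer_key(secret, b"dmv-public-key")  # deterministic
```

The deterministic re-derivation matters: Alex never needs to store per-issuer keys, only his master secret.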
A quick note on trust. How does the issuer know Alex’s details are real? In the real world, that’s the issuer’s job (e.g., the DMV checks your documents). For added robustness, we account for the fact that issuers can sometimes be fooled: we attach a confidence parameter to each issuer. When someone later checks a credential, they can weigh who issued it. The protocol keeps its privacy and security guarantees even when we don’t assume every issuer is perfect.
Stage II: Vouching for Others (Verifiable Relationship Credentials)
Once you have a personhood credential, you can vouch for someone else in a way that others can check. We call these verifiable relationship credentials (VRCs). Think of them as signed, checkable endorsements. We refer to these VRC issuers as vouchers, to distinguish them from issuers, who only issue PHCs.
Back to our example. Carol is a passenger who just had a great ride with Alex. She has her own personhood credential (from the government or another issuer). She wants to endorse Alex by saying:
st = "Is a good driver."
She doesn’t want every endorsement to be linkable to the same “Carol.” So she uses a context-specific key for the ride-share setting. In fact, even issuers cannot link a PHC issuance with an endorsement. Different contexts (ride-share, work, social) get different keys, so no one can tie all her vouchers together.
Voucher Cross-Context Unlinkability. Your endorsements in one context (e.g. ride-share) cannot be linked to you in another context.
Alex has the same kind of privacy need: he doesn’t want every passenger who vouches for him to see that he’s the same driver across all of them. So he uses a fresh receiver key for this interaction, derived from Carol’s context key. Different passengers see different keys; they can’t link Alex across his VRCs.
VRC Receiver Unlinkability. Different vouchers cannot tell that their VRCs were issued to the same person (you).
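The two derivations above chain together: Carol derives a context key, and Alex derives his receiver key from it. A minimal sketch, again using a keyed hash as a stand-in for the group-based derivation (all secrets, labels, and names here are illustrative assumptions):

```python
import hashlib
import hmac

def derive(secret: bytes, label: bytes, context: bytes) -> bytes:
    # Toy hash-based derivation; the real scheme derives group elements
    # and proves correct derivation in zero knowledge.
    return hmac.new(secret, label + b"|" + context, hashlib.sha256).digest()

# Carol's context-specific key for the ride-share setting.
carol_ctx = derive(b"carol-secret", b"context-key", b"ride-share")
# Alex's fresh receiver key for this interaction, bound to Carol's context key.
alex_recv = derive(b"alex-secret", b"receiver-key", carol_ctx)

# A different passenger has a different context key, so she sees a
# different receiver key for Alex and cannot link him across VRCs.
dana_ctx = derive(b"dana-secret", b"context-key", b"ride-share")
assert derive(b"alex-secret", b"receiver-key", dana_ctx) != alex_recv
```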
Carol only reveals what she chooses—for example, just her name—and the rest of her attributes stay hidden. She uses her credential to create a VRC bound to Alex’s receiver key and the statement “Is a good driver.” Anyone can check that the VRC is valid and that the keys were derived correctly (via zero-knowledge proofs), without learning Carol’s or Alex’s secrets. One more guarantee:
VRC Unforgeability. No one can create a valid endorsement without holding a real credential and going through the proper issuance flow.
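To make the VRC concrete, here is a hedged sketch of issuance and checking. Everything is a stand-in: we use an HMAC tag in place of a real signature scheme (so this toy version is not publicly verifiable, and it does not hide Carol's attributes the way the actual zero-knowledge construction does). It only illustrates what a VRC binds together: a receiver key and a statement, under the voucher's key.

```python
import hashlib
import hmac
import json

def issue_vrc(voucher_key: bytes, receiver_key: bytes, statement: str) -> dict:
    # Toy VRC: an authenticated binding of (receiver key, statement).
    # The real VRC is a signature checkable against the voucher's PHC
    # via a zero-knowledge proof.
    payload = json.dumps({"receiver": receiver_key.hex(),
                          "statement": statement}, sort_keys=True).encode()
    tag = hmac.new(voucher_key, payload, hashlib.sha256).hexdigest()
    return {"receiver": receiver_key.hex(), "statement": statement, "tag": tag}

def check_vrc(voucher_key: bytes, vrc: dict) -> bool:
    payload = json.dumps({"receiver": vrc["receiver"],
                          "statement": vrc["statement"]}, sort_keys=True).encode()
    expected = hmac.new(voucher_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, vrc["tag"])

key = b"carol-vrc-key"                                  # hypothetical
vrc = issue_vrc(key, b"alex-receiver-key", "Is a good driver.")
assert check_vrc(key, vrc)
vrc["statement"] = "Is a terrible driver."              # tampering is caught
assert not check_vrc(key, vrc)
```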
Stage III: Proving Your Reputation (Without Revealing Everything)
Alex has collected several “good driver” endorsements from passengers. Now he wants to prove to someone—the ride-share platform, or a new passenger—that he has that reputation, without handing over every VRC he’s ever received. That would be slow and would leak more than needed. Instead, he produces a zero-knowledge proof: one short proof that backs up his claim.
Claim. Three different passengers vouched for me as a good driver.
The proof shows, in zero knowledge, that: (1) he holds three valid VRCs, (2) each says “Is a good driver,” (3) they come from three different vouchers (so not one person vouching three times), (4) each VRC was issued by a voucher with a valid PHC from a trusted issuer (or approved issuer set), and (5) all of the VRCs were issued to the same receiver, namely Alex, without revealing his secret key. He sends this proof plus the identities of the issuers (so the verifier can decide how much to trust them). The verifier checks the proof and can accept or reject the claim.
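The relation behind this proof can be sketched in plaintext. This is purely illustrative: in the real system checks (1)–(5) are carried out inside a zero-knowledge proof over hidden VRCs, so the verifier learns only that they hold; here the toy version reads plaintext dictionaries whose field names are assumptions, and the per-VRC signature validity (check 1) is elided.

```python
def check_claim(vrcs, statement, trusted_issuers, receiver, k=3):
    # Plaintext stand-in for the ZK relation. In the real protocol the
    # verifier never sees the VRCs, only a short proof plus the issuer
    # identities used to calibrate trust.
    vouchers = {v["voucher"] for v in vrcs}
    return (len(vrcs) >= k
            and len(vouchers) >= k                                 # (3) distinct vouchers
            and all(v["statement"] == statement for v in vrcs)     # (2) same endorsement
            and all(v["issuer"] in trusted_issuers for v in vrcs)  # (4) vouchers hold PHCs from trusted issuers
            and all(v["receiver"] == receiver for v in vrcs))      # (5) all issued to Alex

vrcs = [{"voucher": f"passenger-{i}", "issuer": "dmv",
         "statement": "Is a good driver.", "receiver": "alex-recv"}
        for i in range(3)]
assert check_claim(vrcs, "Is a good driver.", {"dmv"}, "alex-recv")

# One passenger vouching three times does not count as three vouchers.
dup = [dict(vrcs[0]) for _ in range(3)]
assert not check_claim(dup, "Is a good driver.", {"dmv"}, "alex-recv")
```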
Privacy in this step: The verifier only learns the claim (“three passengers said I’m a good driver”) and whatever issuer info is needed to calibrate trust. They do not learn Alex’s full set of VRCs, his keys, or any attributes he didn’t choose to reveal.
Showing Privacy. The verifier learns only the claimed statement and the issuer information needed for trust, not the full set of VRCs or hidden attributes.
Wrapping Up
This post conveys the flow and the guarantees of our system without the full formal machinery. We refer the reader to our paper [3] for the full definitions, constructions, and experiments.
In our paper we present an early prototype showing that zero-knowledge proofs for this setting can be practical; we leave further efficiency improvements to future work. Going forward, we aim to add support for legacy credentials, and to support proving membership in a community where trust is defined by a direct connection to a trust anchor or by connections to sufficiently many other members.
References
1. Luis von Ahn, Manuel Blum, Nicholas J. Hopper, and John Langford. CAPTCHA: Using Hard AI Problems for Security. In Advances in Cryptology — EUROCRYPT 2003, pp. 294–311. Springer, 2003. https://link.springer.com/chapter/10.1007/3-540-39200-9_18
2. Akirabot: an OpenAI-based tool used to target 420,000 sites with spam by bypassing CAPTCHAs; see reporting on CAPTCHA-breaking attacks.
3. Arka Rai Choudhuri, Sanjam Garg, Keewoo Lee, Hart Montgomery, Guru Vamsi Policharla, and Rohit Sinha. A Cryptographic Framework for Proofs of Personhood. PDF
4. The First Person Network: a global digital utility for trusted connections between individuals, communities, and organizations, with no intermediaries, no platform, and no surveillance. firstperson.network