I built the PCC collection — 420 Pizza Connection Characters — using Claude for the contract work, the deployment pipeline, and the metadata structure. Not as a vibe exercise. As a real deployment on Polygon with real ownership and OpenSea visibility.
The contracts are live and verifiable. Pizza Connection Characters: 0x6774225402abEF5Aa34e80B8e7cbd99B61d8dd80. GHST token (Ghost in the Prompt): 0x81FAC24Af743F29Cd511b7f31D53b49b4f284E4e. PCC token: 0x592ae6258b1d8cC066375330Aa563B6681BBc5f1. All on Polygon, all verified. Check before you trust anything in this space, including this.
Here is what that actually looked like.
What AI Is Good For
The scaffolding is where AI compresses time dramatically.
Give Claude a clear spec — ERC-721, fixed supply, mint price, owner withdrawal, Polygon mainnet — and it produces a working Solidity contract in under a minute. Not a toy. A real contract with OpenZeppelin inheritance, mint functions, supply caps, and the boilerplate you would otherwise spend an afternoon copying from docs you half-understand.
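That spec-to-contract step looks roughly like this. A minimal sketch, not the deployed PCC contract — it assumes OpenZeppelin v5 import paths, and the name, symbol, and price are illustrative:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Illustrative only: fixed supply, mint price, owner withdrawal.
import {ERC721} from "@openzeppelin/contracts/token/ERC721/ERC721.sol";
import {Ownable} from "@openzeppelin/contracts/access/Ownable.sol";

contract ExampleCollection is ERC721, Ownable {
    uint256 public constant MAX_SUPPLY = 420;
    uint256 public constant MINT_PRICE = 5 ether; // 5 MATIC on Polygon
    uint256 public totalMinted;

    constructor() ERC721("Example", "EXMP") Ownable(msg.sender) {}

    function mint() external payable {
        require(totalMinted < MAX_SUPPLY, "sold out");
        require(msg.value >= MINT_PRICE, "underpaid");
        totalMinted += 1;               // state update before external call
        _safeMint(msg.sender, totalMinted);
    }

    function withdraw() external onlyOwner {
        (bool ok, ) = owner().call{value: address(this).balance}("");
        require(ok, "withdraw failed");
    }
}
```

Note the Ownable(msg.sender) constructor argument — that is the v5 signature, and exactly the kind of detail an older training snapshot gets wrong.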
The same is true for the Hardhat setup. Config file, deployment script, testnet run, Polygonscan verification — Claude walks through all of it. When the deployment errors out because your RPC endpoint is wrong or your gas config is stale, you paste the error back in and it diagnoses it. That loop — write, deploy, fail, paste error, fix, redeploy — runs fast when the model is doing the interpretation work.
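The deployment half of that loop is a short script. A hedged sketch assuming Hardhat with ethers v6; the contract name is a placeholder and the commands in the comment are the standard Hardhat invocations:

```javascript
// scripts/deploy.js — illustrative deploy loop, not my exact file.
const hre = require("hardhat");

async function main() {
  const factory = await hre.ethers.getContractFactory("ExampleCollection");
  const contract = await factory.deploy();
  await contract.waitForDeployment(); // ethers v6 API
  console.log("deployed to:", await contract.getAddress());
}

main().catch((err) => {
  console.error(err);
  process.exitCode = 1;
});

// Run order: testnet first, then mainnet, then verify:
//   npx hardhat run scripts/deploy.js --network amoy
//   npx hardhat run scripts/deploy.js --network polygon
//   npx hardhat verify --network polygon <ADDRESS>
```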
Metadata was the other clear win. ERC-721 metadata has a specific JSON structure OpenSea expects. Trait formatting, image URI patterns, naming conventions — Claude knows the standard and generates it correctly. For 420 characters with attributes, that would have been a week of tedious JSON work. It was not.
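The shape OpenSea expects is simple enough to generate programmatically. A sketch of what that generation looks like — the function name, description text, and ipfs:// base URI are illustrative, not the collection's real values:

```javascript
// Hypothetical metadata builder for one token. The <CID> placeholder
// stands in for a real IPFS content hash.
function buildMetadata(id, traits) {
  return {
    name: `Pizza Connection Character #${id}`,
    description: "One of 420 Pizza Connection Characters.",
    image: `ipfs://<CID>/${id}.png`,
    // OpenSea reads attributes as an array of trait_type/value pairs.
    attributes: Object.entries(traits).map(([trait_type, value]) => ({
      trait_type,
      value,
    })),
  };
}

// Example: metadata for token 1
const meta = buildMetadata(1, { Topping: "Pepperoni", Background: "Neon" });
console.log(JSON.stringify(meta, null, 2));
```

Loop that over 420 character definitions and the week of tedious JSON work collapses into one script.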
Where It Would Have Burned Me
AI writes contracts with confidence whether or not the contract is secure.
The first draft Claude gave me used an older OpenZeppelin import path that had been restructured in v5. It compiled. It would have deployed. It would have worked. It also carried a subtle ownership-transfer pattern that a careful reviewer would have flagged. I caught it because I knew enough to look. Someone who did not would have shipped it.
Reentrancy is the classic one. A basic mint function where the state update happens after the external call — AI will write that pattern without flagging it unless you specifically ask whether the contract is vulnerable to reentrancy attacks. When you ask, it will immediately tell you yes and fix it. It does not volunteer the information.
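The pattern is worth seeing concretely. An illustrative sketch (contract and function names are mine, not from any real codebase) of the vulnerable shape next to the checks-effects-interactions fix:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Illustrative only: the shape to ask the model about.
contract RefundExample {
    mapping(address => uint256) public credit;

    function deposit() external payable {
        credit[msg.sender] += msg.value;
    }

    // Vulnerable shape: external call before the state update.
    function refundBad() external {
        uint256 amount = credit[msg.sender];
        (bool ok, ) = msg.sender.call{value: amount}(""); // attacker re-enters here
        require(ok);
        credit[msg.sender] = 0; // too late
    }

    // Fixed shape: zero the balance first, then send.
    function refundGood() external {
        uint256 amount = credit[msg.sender];
        credit[msg.sender] = 0; // effect before interaction
        (bool ok, ) = msg.sender.call{value: amount}("");
        require(ok);
    }
}
```

The diff between the two functions is three lines reordered. That is the whole vulnerability class, and it is invisible unless you ask.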
The same is true of integer overflow guards, unchecked math, and mint ceiling logic. AI builds what you describe. It does not red-team what it builds. Those are two different jobs and the model will not tell you it only did one of them.
The chain-specific configuration is where AI gets genuinely dangerous for people who do not know what they are looking at. Gas settings, RPC endpoints, block confirmations, Polygon-specific quirks — Claude has training data on all of it, but that data has a cutoff and network configurations change. On testnet it does not matter. On mainnet, with real money in the contract, it does.
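This is the file to double-check by hand. A minimal hardhat.config.js sketch, assuming hardhat-toolbox and secrets in environment variables — the chain IDs (80002 for Amoy, 137 for Polygon mainnet) are the standard ones, but verify the RPC URLs yourself rather than trusting whatever the model suggests:

```javascript
// hardhat.config.js — hedged sketch, not my exact file.
require("@nomicfoundation/hardhat-toolbox");

module.exports = {
  solidity: "0.8.20",
  networks: {
    amoy: {
      url: process.env.AMOY_RPC_URL,      // Polygon Amoy testnet, chainId 80002
      accounts: [process.env.PRIVATE_KEY],
    },
    polygon: {
      url: process.env.POLYGON_RPC_URL,   // mainnet, chainId 137
      accounts: [process.env.PRIVATE_KEY],
    },
  },
  etherscan: {
    apiKey: process.env.POLYGONSCAN_API_KEY, // for `npx hardhat verify`
  },
};
```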
The Process That Actually Works
Start on testnet. Always. Mumbai is deprecated; use Amoy for Polygon testnet work now. Claude will sometimes still reference Mumbai. Correct it when it does.
The working sequence:
1. Write the spec in plain language before touching Solidity. Supply, price, mint limit per wallet, withdrawal address, royalty if you want it. The cleaner your spec, the cleaner the contract. AI amplifies vagueness the same way it amplifies precision.
2. Have Claude write the contract. Read every function. You do not need to understand Solidity deeply to understand what each function claims to do. If a function is doing something you cannot explain in plain English after asking Claude to explain it, that is a red flag.