In my last post, commenter Randall Gamby noted that "of course [tokenization is encryption]." I wholeheartedly agree. But unfortunately the current PCI guidance does not, and cannot, support this notion; because of that, people who build or implement tokenization cannot support it either without creating a tokenization catch-22. When we look at the PCI SSC statement on the scope of encrypted data, we see the following:
“However, encrypted data may be deemed out of scope if, and only if, it has been validated that the entity that possesses encrypted cardholder data does not have the means to decrypt it.”
This takes us to a problematic decision point. If all tokenization is indeed encryption, then having a tokenization system in house doesn't reduce scope at all, since the entity possesses the means to reverse its own tokens. If tokenization is not encryption, then encrypted tokens remain fully in scope even when each value is encrypted under its own unique key, a construction that is functionally equivalent to random tokens. Neither outcome is sensible (and the proposed re-examination of encrypted tokens to carve out exceptions doesn't make sense to me either under the scope guidance).
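To make that equivalence concrete, here is a minimal Python sketch; the function names are mine, and the one-time pad stands in for a real cipher purely for illustration. One function issues a random token backed by a vault; the other encrypts each value under a fresh per-value key. From the outside the two tokens look identical, and in both cases the issuing entity holds the means to reverse them, which is exactly the property the scope language turns on.

```python
import os
import secrets

# Approach 1: classic tokenization. The token is random; the value-to-token
# mapping lives in a vault, and reversal requires access to that vault.
vault: dict[str, str] = {}

def random_token(pan: str) -> str:
    token = secrets.token_hex(16)  # no mathematical relationship to the PAN
    vault[token] = pan
    return token

# Approach 2: "encrypted" tokenization with a unique key per value (a one-time
# pad here, for illustration only). The ciphertext is uniformly random, exactly
# like Approach 1; reversal requires the per-token key table instead of a vault.
key_table: dict[str, bytes] = {}

def encrypted_token(pan: str) -> str:
    data = pan.encode()
    key = os.urandom(len(data))  # fresh random key for this one value
    token = bytes(d ^ k for d, k in zip(data, key)).hex()
    key_table[token] = key
    return token

def detokenize(token: str) -> str:
    """Either way, whoever holds the vault or the key table can reverse the token."""
    if token in vault:
        return vault[token]
    key = key_table[token]
    return bytes(c ^ k for c, k in zip(bytes.fromhex(token), key)).decode()

print(random_token("4111111111111111"))     # random hex string
print(encrypted_token("4111111111111111"))  # equally random hex string
```

An observer cannot tell which scheme produced a given token, so treating one as out of scope and the other as in scope turns entirely on where the reversal data sits, not on any property of the token itself.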
I much appreciated it when the PCI SSC loosened its requirements around encryption key rotation. After all, is it really sensible to have to rotate backup tape encryption keys every year? But more change is needed – change to ensure that any reversible data transformation system is examined with the right level of architectural and algorithmic scrutiny, so we can focus on what actually matters in practice.