Hey Logan, interesting take on ZKPs for AI verification. While I see the potential, I'm not fully convinced it's the best solution. Here's my perspective:
I think something like Twitter's Community Notes might be more practical for dealing with deepfakes. ZKPs sound promising, but I'm concerned about the data storage and compute resources they'd require. Is it really necessary (or feasible) to verify every AI image so intensively?
As AI-generated content becomes more common, I suspect we'll naturally become more skeptical of what we see online. While ZKPs could be one way for platforms to verify content, I wonder if simpler, less resource-intensive methods might emerge for everyday use. We might end up relying more on a combination of technological solutions and human judgment, like trusting certain individuals and organizations with solid track records.
Don't get me wrong, ZKPs could be useful for highly sensitive content. But for everyday stuff? Seems like overkill to me.
What are your thoughts on these concerns? Am I missing something about how ZKPs would work in practice?
Thanks for the feedback! I do agree that a combination of solutions, including but not limited to technical ones, will be required. That was one of my concluding thoughts, but I must not have conveyed it clearly. Additionally, I should have discussed the computational costs of running ZKPs. The computational cost of generating a zero-knowledge circuit from the foundation model's weights is high. Therefore, if the prospective benefits of implementing privacy-preserving verification techniques aren't high enough for a model company, it won't implement them. This is a problem. However, once a circuit is created, the ongoing computational cost of verifying outputs against it is minuscule; it would be similar to querying a database.
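To make that cost profile concrete, here's a minimal Python sketch of the workflow I have in mind. To be clear, this is not a real zero-knowledge proof: the hash commitment below is neither zero-knowledge nor secure against forgery, and all of the function names are hypothetical. It only illustrates the asymmetry I'm describing, an expensive one-time circuit step versus cheap per-image verification.

```python
import hashlib

# Minimal sketch of the cost asymmetry, NOT a real ZKP: a hash stands in
# for the circuit, and the "proof" here is forgeable by anyone holding the
# verification key. All names are hypothetical.

def compile_circuit(model_weights: bytes) -> str:
    # One-time, expensive step paid by the model company: in a real system,
    # arithmetizing the foundation model's weights into a proving circuit.
    # The hash digest stands in for the resulting verification key.
    return hashlib.sha256(model_weights).hexdigest()

def generate_proof(verification_key: str, image: bytes) -> str:
    # Attached by the model company when an image is generated.
    return hashlib.sha256(verification_key.encode() + image).hexdigest()

def verify_proof(verification_key: str, image: bytes, proof: str) -> bool:
    # Cheap per-image check a platform could run: recompute and compare,
    # roughly the cost of a database lookup.
    return proof == generate_proof(verification_key, image)

# The expensive step happens once; every later check is fast.
vk = compile_circuit(b"...foundation model weights...")   # slow, one-time
proof = generate_proof(vk, b"generated image bytes")      # at generation time
assert verify_proof(vk, b"generated image bytes", proof)  # fast, per-image
```

In a real deployment the verification key and proofs would come from a SNARK-style proving system rather than a hash, but the cost profile is the same: heavy setup borne once by the model company, light verification for everyone checking content afterward.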