
As deepfake technology becomes more sophisticated, so do the scams it enables.
In just the first four months of 2025, 163 deepfake-related incidents were reported, leading to over $200 million in losses.
But experts believe that’s only a fraction of the real damage. Ken Jon Miyachi, founder of BitMind—a decentralised AI-powered deepfake detection platform built on the Bittensor Network—has been on the front lines of this growing threat.
In this interview with Invezz, Miyachi discusses the accelerating arms race between deepfake creators and detection tools, the regulatory response, and how BitMind is positioning itself to protect both corporations and everyday users from increasingly brazen attacks.
Invezz: The Q1 2025 Deepfake Incident Report cites 163 incidents in just the first four months, with over $200 million lost. From your vantage point at BitMind, is this just the tip of the iceberg?
Yes, this is just the tip of the iceberg.
Other reports cite over 580 incidents and nearly $900 million in losses across H1 2025, and the trend is clearly upward as deepfake scams grow more sophisticated.
Invezz: How does BitMind keep pace with fast-evolving generative AI models, and is this turning into a perpetual cat-and-mouse game with deepfake creators?
BitMind leverages Bittensor, where a global network of AI developers compete to refine detection models in real time. They pool resources and use crypto-economic incentives to improve the models dynamically and adapt to the latest output of state-of-the-art generative AI models.
Yes. Like cybersecurity, this is a perpetual cat-and-mouse game between detectors and deepfake scammers.
Staying ahead comes down to velocity and the ability to adapt quickly, as well as building a generalised solution that can detect data the model has never seen before.
Invezz: How accurate are detection tools in real-time settings like video calls? And what protection exists—or should exist—for ordinary users increasingly being targeted?
Our current tools have been benchmarked at 88% accuracy on images and are lagging slightly behind on videos, where real-time intervention is crucial.
Existing protections include browser extensions and web tools like BitMind's AI detector.
More should exist, such as mandatory AI literacy education and consumer-grade, real-time proactive detection systems.
Invezz: Are regulations keeping pace with the rise in deepfake-driven fraud, especially in finance?
For the most part, yes. Although regulations are lagging behind, a variety of state legislation is being introduced targeting fraud and deepfake-related crimes.
Given how borderless the internet and AI are, I think federal or even global legislation on deepfakes and financial fraud will be important.
Invezz: Would you support mandatory watermarking or provenance standards for AI-generated content? And how is BitMind positioning itself in this evolving threat environment?
Yes, this is already happening with companies like Google and OpenAI using watermarking to help identify AI-generated content.
These standards work. However, the entropy of the internet is so vast that I do not believe watermarking can be the catch-all solution to the deepfake problem.
BitMind is positioning itself as the leading consumer deepfake detection service in the world with top “in-the-wild” accuracy.
Our roadmap includes a mobile-native solution, as well as scaling our services into enterprise use cases such as corporate communication tools and financial services.