# Six-Layer Analysis of Kaito Yaps: You're Surrounded by a Giant Network!
I came across the Kaito project quite early on, initially through scattered mentions by bloggers. At first, I thought it was just another social media analytics tool.
Later, more and more KOLs started promoting it, and by this week, my timeline was practically dominated by Kaito. Particularly noteworthy was that many bloggers who typically post high-quality content were talking about it. Each additional endorsement piqued my curiosity further, as I trust these bloggers' judgment.
It can't be that simple! What I've seen must be just the tip of the iceberg. So I added it to my research agenda.
What exactly is Kaito doing? And what might they be planning to do? Here's my understanding based on my research process.
## Layer 1: Kaito at Face Value - A KOL Brand Value Ranking?
From what's publicly visible ([yaps](https://yaps.kaito.ai/)), this appears to be a product that quantifies Twitter account influence in some way. This influence is represented by "Yapper Mindshare." Mindshare can be understood as the position and relative importance of a brand within its target audience - higher mindshare typically leads to more purchasing behavior.
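To make "mindshare" a little more concrete: a minimal sketch, assuming mindshare is simply an account's share of the total attention within some audience. The accounts and attention scores below are invented, and this is not Kaito's actual formula.

```python
# Hypothetical sketch: mindshare as a normalized share of attention.
# Accounts and attention scores are made up; Kaito's real formula is not public.

attention = {
    "alice": 1200.0,   # e.g. weighted impressions/engagement on a topic
    "bob":   300.0,
    "carol": 500.0,
}

total = sum(attention.values())
mindshare = {acct: score / total for acct, score in attention.items()}

for acct, share in sorted(mindshare.items(), key=lambda kv: -kv[1]):
    print(f"{acct}: {share:.1%}")
# alice: 60.0%
# carol: 25.0%
# bob: 15.0%
```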
This definition reveals how the project frames social account influence - as brand value/goodwill. Compare this to the now-defunct Friend.tech, which tried to define influence as financial capital. **The conversion from brand value to financial capital should be one-directional and controllable**. FT had no such valve, so it couldn't hold up.
On the homepage, you can clearly see Twitter account rankings based on yaps scores, heatmaps, network graphs, and more.
The top menu bar offers options to view:
1. Yaps for specific topics or topic categories (e.g., those related to particular projects)
2. Project pre-TGE volume
3. VC influence
Currently, there's no visible explanation of how yaps scores are calculated. In fact, keeping these rules opaque is one of the things Kaito has done well, which I'll discuss later.
This is Layer 1 of Kaito: seemingly just another unremarkable social-mining project. Social mining was already played out in previous cycles, and many of its problems were never solved, to name a few: difficulty in automatically defining content value; distorted user behavior incentives; proliferation of spam and farming; community atmosphere deteriorating as bad content drives out good...
## Layer 2: AI-Assisted Analysis - Making Social Mining Viable?
Scrolling down the yaps product page reveals a detailed dashboard where AI analyzes user content for sentiment, determining whether an account's posts are technical or casual, plagiarized or original, and whether they tend to post spam or valuable content.

Currently, there are only these three metrics, but many more could be added gradually. Just like those "Twitter personality analysis" toys that periodically trend - you could easily add MBTI personality tests. BTW, adding MBTI would actually be quite fun...
This highlights a crucial piece that wasn't available in the previous cycle: the role AI can play in data analysis and quantification. Or rather, NLP (Natural Language Processing) had existed for a while but hadn't been applied this way - the tool was there, but the right moment to use it hadn't arrived.
Going back to the previously mentioned unsolvable problems of social mining, these issues are actually interconnected, with the fundamental problem being the inability to quantify content value. Now, using AI to analyze the information value of a tweet, blogger sentiment, etc., has become possible. As for how to quantify value itself, that ties into the PageRank topic we'll discuss later.
Once content value can be quantified relatively accurately, the problem of spam content proliferation can be mitigated; issues like farming, low-effort posting, and quality creators being pushed out can be addressed; and the problem of distorted user behavior incentives can be alleviated. In turn, behavioral incentives can be redirected toward producing quality content.
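As a rough illustration of what "quantifying content value" could look like, here's a toy sketch built on my own assumptions. A real system would plug NLP models in where I use keyword heuristics, and nothing here reflects Kaito's actual pipeline - it just shows how per-tweet signals could fold into one score.

```python
# Toy sketch of AI-assisted content scoring (my assumptions, not Kaito's pipeline).
# The keyword heuristics stand in for real NLP models so the example stays self-contained.

from dataclasses import dataclass

@dataclass
class ContentScore:
    technical: float   # 0..1, how "research-like" the post reads
    original: float    # 0..1, originality (vs. copy-paste)
    not_spam: float    # 0..1, inverse spam likelihood

def score_tweet(text: str, seen_before: set[str]) -> ContentScore:
    words = text.lower().split()
    technical_terms = {"mechanism", "liquidity", "algorithm", "tvl", "audit"}
    technical = min(1.0, sum(w in technical_terms for w in words) / 3)
    original = 0.0 if text in seen_before else 1.0
    spam_markers = {"gm", "wen", "airdrop!!!"}
    not_spam = 0.0 if any(w in spam_markers for w in words) else 1.0
    return ContentScore(technical, original, not_spam)

def content_value(s: ContentScore) -> float:
    # Hypothetical weighting: reward depth and originality, punish spam hard.
    return 0.4 * s.technical + 0.3 * s.original + 0.3 * s.not_spam

seen: set[str] = set()
good = "Thread: how the audit found a liquidity mechanism bug"
spam = "gm gm wen airdrop!!!"
print(round(content_value(score_tweet(good, seen)), 2))  # 1.0
print(round(content_value(score_tweet(spam, seen)), 2))  # 0.3
```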
The current yaps scoring system is still essentially social mining, no argument there. But when I put it this way, doesn't it seem a bit different?
Hold on. My understanding is that yaps is just one piece in Kaito's market strategy. KOLs are the group with the biggest share of voice and the easiest path to virality, but as I'll discuss in Layer 3, KOLs are just one of many possible classification angles.
## Layer 3: Data Analytics Tool
As mentioned earlier, yaps currently shows project pre-TGE volume and VC influence. These are derived from the official Twitter account data of corresponding projects/VCs.
Theoretically, as long as the underlying data collection/cleaning/categorization process is sufficiently detailed, similar menu functions can be infinitely expanded, as they're just category aggregations based on different characteristics. For instance, they could roll out Twitter politician influence rankings, xx topic analysis, most popular adult content creators, etc. From a social media sentiment perspective, Kaito will definitely outperform simple data tools that merely calculate keyword frequency or KOL keyword overlap, because Kaito's weightings come from long-term neural network optimization.
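For example, once per-account scores exist, a "pre-TGE" or "VC" view is little more than a group-by over account categories. A minimal sketch with invented accounts, categories, and scores (not Kaito's implementation):

```python
# Sketch: rolling per-account scores up into category rankings.
# Accounts, categories, and scores are invented; this only illustrates
# how such menu views could be built, not how Kaito actually builds them.

from collections import defaultdict

accounts = [
    {"handle": "vc_fund_a",  "category": "VC",      "yaps": 420.0},
    {"handle": "vc_fund_b",  "category": "VC",      "yaps": 180.0},
    {"handle": "new_proj_x", "category": "pre-TGE", "yaps": 310.0},
    {"handle": "new_proj_y", "category": "pre-TGE", "yaps": 95.0},
]

by_category: dict[str, float] = defaultdict(float)
for acct in accounts:
    by_category[acct["category"]] += acct["yaps"]

for category, total in sorted(by_category.items(), key=lambda kv: -kv[1]):
    print(f"{category}: {total}")
# VC: 600.0
# pre-TGE: 405.0
```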
Speaking of data analysis, if they can achieve this with Twitter's extremely expensive API data, wouldn't on-chain data collection be even easier? This includes other common platforms. Even if they don't build their own on-chain data solution, partnering with platforms like Arkham to launch alpha tools, monitoring alerts, investment research assistants, and other side products shouldn't be a problem.
Let me give an example of something on-chain data products absolutely can't do.
Operational rhythm analysis.
How would you normally do this? Find the main operational platforms, usually Twitter and Discord, and scrape every announcement, tweet, content, interaction data, comment sentiment, AMAs, Discord chat content, PR materials, etc. For projects with tokens, you'd need to correlate with price charts; for those with NFTs, look at asset transaction records. You'd analyze major operational cycle activities, monitor community sentiment changes and response methods, etc. Whether a project team is good at operations is actually quite important for secondary market investment, but currently, it requires a lot of human resources and effort.
However, if AI is used to assist analysis based on existing data, you'd just need one mature set of algorithmic rules. The same logic can be applied to:
- The development process of a trending topic
- The evolution of a meme coin (want to analyze CABAL?)
- A comprehensive review of the US election process
- ...
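To make "one mature set of algorithmic rules" a bit more concrete, here's a rough sketch of one core step such a pipeline might have: bucketing a project's events over time and attaching community sentiment. The event types, fields, and numbers are all my own assumptions, not anyone's real pipeline.

```python
# Rough sketch of one step in "operational rhythm" analysis: bucket a project's
# events by month and track average community sentiment. Everything here
# (event types, sentiment values) is invented for illustration.

from dataclasses import dataclass
from datetime import date

@dataclass
class Event:
    day: date
    kind: str        # "announcement", "ama", "partnership", ...
    sentiment: float # -1..1, e.g. from an NLP model run over replies

def activity_by_month(events: list[Event]) -> dict[str, dict[str, float]]:
    """Count events per month and average the community sentiment."""
    buckets: dict[str, dict[str, float]] = {}
    for e in events:
        key = e.day.strftime("%Y-%m")
        b = buckets.setdefault(key, {"events": 0, "sentiment_sum": 0.0})
        b["events"] += 1
        b["sentiment_sum"] += e.sentiment
    return {
        month: {"events": b["events"],
                "avg_sentiment": round(b["sentiment_sum"] / b["events"], 2)}
        for month, b in buckets.items()
    }

events = [
    Event(date(2024, 11, 3),  "announcement", 0.6),
    Event(date(2024, 11, 20), "ama",          0.2),
    Event(date(2024, 12, 5),  "partnership", -0.1),
]
print(activity_by_month(events))
# {'2024-11': {'events': 2, 'avg_sentiment': 0.4}, '2024-12': {'events': 1, 'avg_sentiment': -0.1}}
```

From there, the same buckets could be joined against price or NFT transaction data - the point is that once the rules are written down, reusing them across projects costs almost nothing.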
Is such a product competitive enough? I believe it definitely is. The core lies in the algorithmic accumulation behind yaps - I can do what you can't. With low marginal costs, each side product just needs to precisely target its user group.
## Layer 4: InfoFi? Financialized Information Portal
When I first entered the space, I wrote an article, [I Designed a "Rotten Tomatoes + Wiki" Project on Blockchain: How to Better Guide Community Governance?](https://jojonas.notion.site/wiki-0981a45b5f7c4578a2b4db653a473e4a?pvs=4), envisioning a Douban/Rotten Tomatoes-style rating site where communities rate projects, with various mechanisms to keep the ratings fair and accurate.
Project rating isn't special - any portal website can do it, but is it meaningful? Leaving aside that current user habits aren't aligned with this, even if users were accustomed to checking ratings, what's the point of scores that can be easily manipulated?
Looking at some established long-term rating websites like Douban/Rotten Tomatoes/ProductHunt, their approaches typically involve:
1. Using machine learning to filter out potential rating manipulation
2. Weighting registered users differently based on activity/influence (see the sketch after this list)
3. Inviting professionals for ratings (essentially DPoS)
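Here's that sketch for point 2: weight each rating by a reputation score instead of taking a plain average. The users, weights, and numbers below are invented, and real sites would tune such weights with machine learning - this is just the idea in miniature.

```python
# Weighted ratings (point 2 above): reputation-weighted average vs. plain mean.
# All users, reputations, and star ratings are invented for illustration.

ratings = [
    {"user": "long_time_reviewer", "reputation": 5.0, "stars": 2},
    {"user": "fresh_account_1",    "reputation": 0.1, "stars": 5},
    {"user": "fresh_account_2",    "reputation": 0.1, "stars": 5},
]

plain_average = sum(r["stars"] for r in ratings) / len(ratings)

weighted_average = (
    sum(r["reputation"] * r["stars"] for r in ratings)
    / sum(r["reputation"] for r in ratings)
)

print(round(plain_average, 2))     # 4.0  - easily dragged up by fresh accounts
print(round(weighted_average, 2))  # 2.12 - reputation damps the manipulation
```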
Many portal website ratings are meaningless because their main business is based on information categorization - they can't invest heavily in machine learning or user weight calculation. Plus, current user habits aren't there, making the cost-benefit analysis quite clear.
But I'm thinking, for many newcomers who don't understand our complex discussions (like they wouldn't read a word of this article), they need something simple and straightforward telling them: Is this project good? Is this KOL trustworthy? Don't beat around the bush - YES or NO?
A fundamental behavioral pattern is that for things difficult to evaluate intuitively - movies, food, AI models - a rating system will inevitably emerge to help people quickly and intuitively gauge value. This eventually becomes a user habit. Just like how I always check Douban before watching something, or how my friends always check Dianping before eating out.
At this point, you might be thinking of something. Yes, search engines - we'll get to that later.
Let me first explain why I believe a standard, usable information portal is important.
Here are a few examples:
1. We often see complaints about the "400U club" scamming people, but what's the use? It's not that retail investors have no memory - there are just too many scammers to remember. Even if you remember someone, they might rebrand or sell their account and start over. Users naturally aren't good at remembering these things, just like how @0x_Todd recently confused two "Horse guys" - I couldn't help but laugh because most people are the same way, including me.
2. Most portal websites now include ratings, but they're often just simple averages, with no way to know who's behind the rating accounts. Is Elon Musk's 5-star rating of your project the same as your own self-rating? I think not.
3. Consider this question: Why do you think Hurun and Forbes spend so much effort creating rankings every year?
Moody's, one of the world's largest rating agencies, had a net profit of $2.4 billion in 2023. For project teams, running information portal services has many benefits: enhancing industry status and discourse power; charging B-end service fees; collecting C-end subscription fees; and for the less scrupulous, advertising revenue...
Just the first point - discourse power. Even though I dislike discussing power, it's undeniable that sometimes, monetizing power can surpass any business model, regardless of how that power was obtained.
## Layer 5: Search Engine?
Earlier I mentioned PageRank; let me explain.
PageRank is one of Google's core search engine algorithms. Simply put, PageRank measures a webpage's importance - if a webpage is linked to by many other pages, its ranking will be higher. In a sense, this ranking represents the weight of each node (webpage) in the network structure.
Imagine a network where every webpage starts with a weight of 1. Now imagine a random surfer who, each time they visit a node, follows one of that page's links to another page, or with a small probability jumps to a random unrelated page (this is governed by a damping factor, which we'll ignore for now). If this surfer keeps jumping between pages, then over time the frequency of visits to each page can be read as that node's weight.
Of course, the actual algorithm isn't this simple. I don't understand much more than this. Let's just abstract this basic model.
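For reference, here's a minimal power-iteration sketch of exactly this basic model, on a toy link graph. This is the textbook, public PageRank - it says nothing about Kaito's internals.

```python
# Minimal PageRank via power iteration on a toy link graph.
# Textbook model only - nothing here reflects Kaito's internals.

def pagerank(links: dict[str, list[str]], damping: float = 0.85,
             iterations: int = 100) -> dict[str, float]:
    nodes = list(links)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}           # equal weight to start
    for _ in range(iterations):
        new_rank = {node: (1.0 - damping) / n for node in nodes}
        for node, outgoing in links.items():
            if not outgoing:                            # dangling node: spread evenly
                for other in nodes:
                    new_rank[other] += damping * rank[node] / n
            else:
                for target in outgoing:
                    new_rank[target] += damping * rank[node] / len(outgoing)
        rank = new_rank
    return rank

# Toy graph: "c" is linked to by both "a" and "b", so it ends up ranked highest.
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
for node, score in sorted(pagerank(graph).items(), key=lambda kv: -kv[1]):
    print(node, round(score, 3))
# c 0.397
# a 0.388
# b 0.215
```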
Now we can return to the content value issue mentioned earlier. Since content has subjective elements, I can't say someone posting research reports is automatically high quality while someone shitposting is low quality - it's not that simple. Or rather, it's partly true, but there will always be a portion of weight that comes from "PageRank."
Let's return to a term commonly used in the internet industry and in meme trading - attention value.
To put it visually, each time your node is viewed, it accumulates some attention value; then PageRank (which in Kaito might be an internal parameter abstracted from yaps) is the relative value of attention you can capture.
So comparing to Google, if Kaito provides Web3 information search services, who would appear first when you search for something?
KOLs: Damn, I thought I was using you, but in the end, you're using me?
Justice warriors against the 400U club...
Just kidding. Let's move away from KOLs to VCs, project teams, CABAL groups, political interest groups with stronger paying ability. If Kaito could evolve to Google's status, these B-end clients need to acquire users - who do they pay? How much? Too expensive? Well, shall we do paid rankings then?
...
## Layer 6: Spring of Mechanism Design?
To be honest, this layer isn't really about Kaito - it's just some inspiration I got from researching the project. If it makes you happy, call it Layer 0 or the basement, that's fine too.
All mechanism designs, no matter how intricate their pre-calculations are, will face challenges from different interest groups once actually implemented.
As the saying goes, "For every policy from above, there's a countermeasure from below" - this speaks to the evolution of game strategies.
A more serious issue is that different groups' interests can be in fundamental conflict, which means someone will always question the rules and stir up trouble in the community. One key line of attack: quantification rules for "facts" that are hard to quantify. Debates about these usually end in meaningless back-and-forth.
But AI scoring black boxes don't have this problem.
If AI gives you 80 points and says your yang energy is weak and needs improvement, what do you think?
1. You feel offended, but the offender is AI, so whatever. This is natural community tolerance.
2. Why does AI give others 90 points but only 80 to me? Is something wrong with me? This is AI's technical black box maintaining the high ground in public opinion.
3. If the AI genuinely keeps self-iterating, you'll find its judgments becoming more and more accurate. This is the boiling-frog effect of cognitive acceptance.
Therefore, a good AI scoring model offers:
1. Everything becomes quantifiable. If quantification results are poor, it's not that it's unsuitable for quantification - the data just isn't sufficient yet. Once everything can be converted into clear data, many statistics and algorithms can make breakthrough progress.
2. AI scoring is a black box, meaning there's no complete information, and the optimal strategy for game participants becomes even more uncertain.
3. Greatest common divisor of consensus. Previously everyone looked down on each other, thinking their way was better; now everyone has to accept AI as the standard, right? Peace and harmony...
4. Random fun. This would work very well in yield farming.
Of course, AI's ambiguity is a double-edged sword, making it easily used as an excuse for conspiracy theories. These issues will likely be gradually resolved as technology develops.
That's about all I've thought of. Finally, let me share my risk assessment of the project.
**Risk Assessment**
Looking at yaps as the initial product, Kaito's current advantage is built on Twitter, since most crypto social media activity happens there. If Twitter's API pricing changes, or data access is restricted outright, the impact could be serious. As for Farcaster, its ecosystem isn't yet mature enough for viral growth, and because the cost of entry there is low, competitors can move in easily; in the short term, small tools remain competitive.
A more core issue is the weighting algorithm: if the final results aren't convincing enough - say most people think A should rank well above B, yet A actually ranks far lower - that's a problem. At that point it's no longer a matter of intuition; it points to flaws in the algorithm's logic or to manipulation by the team. This also includes the need to prove the AI's neutrality.
Third is whether it can achieve maximum adoption. Yaps can be expensive, targeting elite users - that's fine. But how to get more people to accept and adopt it is actually the hardest part of building a platform. On the B-end, those ranking high will naturally be happy and support it; but what about those ranking low - will they cause trouble? On the C-end, KOLs have been attracted but have also become "liabilities" - if just one person on the rankings gets into trouble, trust will plummet. This is single-point risk * N. The path from launching yaps to maximum adoption will be challenging.
**Disclosure**
I haven't received a penny from Kaito, I don't know anyone at Kaito, and because of my personal anti-social-engineering principles I didn't even fill in my wallet address after connecting Twitter, so I missed out on the airdrop. All opinions in this article are my derivative thoughts based on this project and don't necessarily represent what the project will do. The purpose of writing this article was stated at the beginning: Kaito seems more complex than I initially thought, and I wanted to study it.
Welcome to follow and discuss on Twitter: [@jojonas_xyz](https://x.com/jojonas_xyz)
Writing isn't easy - thanks for likes and retweets! Also published on mirror (https://mirror.xyz/jojonas1.eth/ZtMJSungWMN8H5GDawSx-0vuhYP71mSdIMK8al_BlBo)