Bluesky released its first transparency report this week, detailing the efforts of its Trust & Safety team alongside initiatives like age-assurance compliance, monitoring of influence operations, and automated labeling.
The social media platform, a competitor to X and Threads, grew nearly 60% in 2025, rising from 25.9 million to 41.2 million users. That figure includes accounts hosted on Bluesky’s own infrastructure as well as independently operated accounts on the decentralized social network built on Bluesky’s AT Protocol.
Over the past year, users posted 1.41 billion times, accounting for 61% of all posts ever made on Bluesky. Among these, 235 million included media, which represented 62% of all media shared on the platform to date.
The report also highlighted a substantial increase in legal requests from law enforcement, government regulators, and legal representatives, with a total of 1,470 requests in 2025—up from 238 in 2024.
While Bluesky published moderation reports in 2023 and 2024, this is its first comprehensive transparency report, covering additional areas such as regulatory compliance and account verification.
Moderation reports from users increased by 54%, from 6.48 million in 2024 to 9.97 million in 2025. That is a far gentler climb than the 17x jump in moderation reports Bluesky experienced in 2024.
Despite the surge, Bluesky pointed out that the growth in reports closely paralleled its 57% increase in user numbers during the same period.
Approximately 3% of the user base, or 1.24 million users, submitted reports in 2025. The leading category was “misleading” content, which encompasses spam, at 43.73% of reports, followed by “harassment” at 19.93% and sexual content at 13.54%. A catch-all “other” category accounted for 22.14%, while issues like violence, child safety, site rule violations, and self-harm made up smaller shares.
In the “misleading” category, which totaled 4.36 million reports, spam made up 2.49 million.
Within harassment, hate speech was the largest identified segment of the 1.99 million reports, at around 55,400 reports. Other notable areas included targeted harassment (about 42,520), trolling (about 29,500), and doxxing (about 3,170). Bluesky noted that many harassment reports involved vaguer antisocial behavior, such as rude comments, that didn’t fit neatly into any one category.
Most sexual content reports (1.52 million) concerned mislabeling, meaning adult content wasn’t properly tagged so users could filter it. A smaller number involved nonconsensual intimate imagery (about 7,520), abuse content (about 6,120), and deepfakes (over 2,000).
Reports concerning violence (24,670 total) were divided into subcategories like threats or incitement (approximately 10,170 reports), glorification of violence (6,630), and extremist content (3,230).
Alongside user reports, Bluesky’s automated system flagged 2.54 million potential violations.
One noted improvement was a 79% drop in daily reports of antisocial behavior after Bluesky implemented a system that identifies toxic replies and reduces their visibility, similar to measures used by X.
Bluesky also experienced a month-over-month reduction in user reports, with reports per 1,000 monthly active users decreasing by 50.9% from January to December.
Outside of moderation, Bluesky stated it removed 3,619 accounts suspected of influence operations, predominantly linked to activities from Russia.
In terms of takedowns and legal requests, Bluesky has become more aggressive with its moderation and enforcement. In 2025, it took down 2.44 million items, including accounts and posts, compared with just 66,308 accounts the previous year, of which automated tools removed 35,842. Moderators took down an additional 6,334 records, while automated systems accounted for 282 removals.
Bluesky also issued 3,192 temporary suspensions in 2025, alongside 14,659 permanent removals for ban evasion. Most of these permanent suspensions targeted accounts engaging in inauthentic behavior, spam networks, and impersonation.
However, the report indicated that Bluesky prefers labeling content to outright banning users. In 2025, it applied 16.49 million labels to content, a 200% increase year-over-year, while account takedowns grew by 104%, from 1.02 million to 2.08 million, primarily targeting adult and suggestive content or nudity.
