Reporting someone can feel heavy—even when you are sure you did the right thing. This guide explains typical moderation steps so you are not left guessing at worst-case scenarios.
Intake: what you submit
Most apps collect a category (harassment, scams, impersonation, threats), a short description, and optional screenshots. The more factual your description, the faster reviewers can triage it. Stick to behaviors, not character essays.
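The intake fields above can be sketched as a simple data model. This is a hypothetical illustration, not any platform's actual schema; the `Category` values, the `is_triageable` helper, and the 500-character heuristic are all assumptions made for the example.

```python
from dataclasses import dataclass, field
from enum import Enum


class Category(Enum):
    # hypothetical category set, mirroring the ones named in this guide
    HARASSMENT = "harassment"
    SCAM = "scam"
    IMPERSONATION = "impersonation"
    THREAT = "threat"


@dataclass
class Report:
    category: Category
    description: str                                  # short and factual: behaviors, not character essays
    screenshots: list = field(default_factory=list)   # optional supporting evidence

    def is_triageable(self) -> bool:
        # assumed heuristic: a non-empty, concise description is easiest to triage
        text = self.description.strip()
        return bool(text) and len(text) <= 500
```

A report like `Report(Category.SCAM, "Sent a fake payment link on March 3")` passes this check, while an empty or rambling description would not.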
Review: humans and models
Automated classifiers cluster similar reports and scan for banned media. Humans step in when context matters—timing, escalation patterns, or edge cases. Neither path is instant at scale; reputable platforms publish realistic timelines in their help centers.
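The clustering step can be sketched very roughly: grouping reports about the same account and category lets reviewers see escalation patterns at a glance. This is a toy illustration under assumed field names (`target_id`, `category`), not how any production classifier actually works.

```python
from collections import defaultdict


def cluster_reports(reports: list[dict]) -> dict:
    """Group reports by (target account, category).

    Hypothetical sketch: real systems use ML similarity, not exact keys,
    but the goal is the same—surface repeated behavior as one case.
    """
    clusters: dict = defaultdict(list)
    for report in reports:
        clusters[(report["target_id"], report["category"])].append(report)
    return clusters
```

Three reports about the same account for the same behavior would land in one cluster, signaling a pattern rather than a one-off.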
Outcomes you might see
Depending on severity and evidence, outcomes may include warnings, feature restrictions, suspensions, or bans. You may not receive the full decision letter due to privacy rules, but you should get acknowledgment that the report was received.
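The escalation ladder above can be sketched as a severity mapping. The numeric levels, the `prior_strikes` parameter, and the escalation rule are all invented for illustration; actual policies weigh evidence and context in ways a lookup table cannot capture.

```python
# hypothetical ladder, ordered from mildest to most severe
OUTCOMES = ["warning", "feature_restriction", "suspension", "ban"]


def outcome_for(severity: int, prior_strikes: int = 0) -> str:
    # assumed rule: higher severity or repeat offenses escalate the response,
    # capped at the most severe outcome
    level = min(severity + prior_strikes, len(OUTCOMES) - 1)
    return OUTCOMES[level]
```

Under this sketch, a first low-severity offense draws a warning, while the same behavior from a repeat offender escalates toward suspension or a ban.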
Privacy realities
Your account details are visible to trained staff on a need-to-know basis. If someone retaliates by reporting you falsely, document your side calmly. At InstaFuck, we treat retaliation reports as their own category because false reports distort trust for everyone else.
If you are in immediate danger, contact local emergency services first—apps complement real-world help; they do not replace it.