A machine learning project that detects, classifies, and filters harmful or offensive comments on online platforms. Using NLP techniques, the system identifies toxic language in real time, helping maintain healthier communication spaces and reducing the risk of harassment and abuse.
Key Features:
- NLP-based text analysis
- Multi-class toxicity detection (see the classifier sketch below)
- Real-time content moderation (see the API sketch below)
- Scalable integration with web apps
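
The features above imply two moving parts: a classifier and a serving layer. Below is a minimal sketch of the classifier side, assuming a scikit-learn TF-IDF pipeline with one-vs-rest logistic regression and an illustrative three-label scheme (`toxic`, `insult`, `threat`). The label set, training texts, and hyperparameters are placeholders, not the project's actual configuration.

```python
# Minimal sketch: multi-label toxicity classification with TF-IDF features.
# Labels and training data here are illustrative placeholders.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline

LABELS = ["toxic", "insult", "threat"]  # assumed label set

# Toy corpus; a real project would load a labelled dataset instead.
texts = [
    "you are a wonderful person",
    "you are an idiot",
    "i will hurt you",
    "have a great day",
]
# One row per comment, one column per label (1 = label applies).
y = np.array([
    [0, 0, 0],
    [1, 1, 0],
    [1, 0, 1],
    [0, 0, 0],
])

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
model.fit(texts, y)

# Per-label probabilities for a new comment.
probs = model.predict_proba(["you are an idiot"])[0]
for label, p in zip(LABELS, probs):
    print(f"{label}: {p:.2f}")
```

Returning a probability per label, rather than a single hard class, lets the host application set its own strictness for each toxicity category.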
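For the real-time moderation and web-app integration side, here is a minimal sketch assuming FastAPI and the `model`/`LABELS` objects from the classifier sketch above; the `/moderate` route, request schema, and 0.5 threshold are illustrative assumptions, not the project's documented API.

```python
# Minimal sketch: real-time moderation endpoint (FastAPI assumed).
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
THRESHOLD = 0.5  # assumed decision threshold per label


class Comment(BaseModel):
    text: str


@app.post("/moderate")
def moderate(comment: Comment):
    # Score the comment and collect any labels over the threshold.
    probs = model.predict_proba([comment.text])[0]
    flags = {label: float(p) for label, p in zip(LABELS, probs) if p >= THRESHOLD}
    return {"allow": not flags, "flags": flags}
```

Run locally with `uvicorn app:app` and POST `{"text": "..."}` to `/moderate`; the response indicates whether to allow the comment and which labels triggered.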
