# NLP_filter_toxic_comment

Toxic comments are detected and filtered using a naive Bayes classifier.

## Problem description

Many people believe that artificial intelligence will one day replace humans and eliminate many jobs, especially those that require little creativity. Extremist groups have begun staging protests against this trend, and your company, which has recently made a name for itself, is one of their targets. Because of the Covid pandemic, protests cannot be held in person, so these groups have decided to attack your company's website by leaving malicious comments. You want these comments to be filtered before they are posted on the site so that your company's reputation is not damaged. The simplest approach is to hire employees who identify and delete the comments by hand, but this costs a lot of money, and unless you hire a large number of employees the extremist group will probably still achieve its goal. So you arrive at a better solution: instead of leaving such a simple task to people, develop a sentiment-recognition system that does it automatically.

## Training dataset

The attached dataset contains two files, rt-polarity.neg and rt-polarity.pos, which hold the negative and positive comments from a site, respectively (one comment per line).
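
Below is a minimal sketch of how such a classifier could be trained on these files. It is not the repository's actual code: the scikit-learn pipeline, the latin-1 encoding, the 0/1 label convention, and the 80/20 split are all assumptions made for illustration; only the file names rt-polarity.neg and rt-polarity.pos come from the dataset description above.

```python
from pathlib import Path

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline


def load_comments(path, label):
    """Read one comment per line and pair each with its label."""
    # rt-polarity files are often latin-1 encoded; change if your copy differs.
    text = Path(path).read_text(encoding="latin-1")
    comments = [line.strip() for line in text.splitlines() if line.strip()]
    return comments, [label] * len(comments)


# Label convention (assumed): 0 = negative/toxic, 1 = positive.
neg_texts, neg_labels = load_comments("rt-polarity.neg", 0)
pos_texts, pos_labels = load_comments("rt-polarity.pos", 1)

texts = neg_texts + pos_texts
labels = neg_labels + pos_labels

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, random_state=42, stratify=labels
)

# Bag-of-words features fed into a multinomial naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(X_train, y_train)

print(f"Held-out accuracy: {model.score(X_test, y_test):.3f}")

# A comment predicted as 0 (negative) would be filtered before posting.
print(model.predict(["this product is a scam and the owners are frauds"]))
```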