
ClimateNLP: Analyzing Public Sentiment Towards Climate Change: Results and Discussions


Too Long; Didn't Read

Natural language processing approaches can be applied to the climate change domain to identify causes and uncover patterns, such as public sentiment and discourse, around this global issue.
This paper is available on arXiv under a CC 4.0 license.

Authors:

(1) Ajay Krishnan T. K., School of Digital Sciences;

(2) V. S. Anoop, School of Digital Sciences.

5 Results and Discussions

This section presents the results obtained from the experiments conducted using the approach proposed in Section 4, along with a detailed discussion.


Table 2: Precision, Recall, Accuracy, and F-Measure values for the TF-IDF feature encoding


Table 3: Precision, Recall, Accuracy, and F-Measure values for Word2Vec


Table 4: Precision, Recall, Accuracy, and F-Measure values for CountVectorizer


Table 5: Precision, Recall, Accuracy, and F-Measure values for TF-IDF + CountVectorizer
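For context, the feature encodings compared in Tables 2 to 5 correspond to standard text-vectorization techniques. The sketch below is a minimal illustration using scikit-learn and gensim, not the authors' code; the sample tweets, vector dimensions, and the averaging scheme for Word2Vec are all assumptions.

```python
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from gensim.models import Word2Vec

# Hypothetical cleaned tweets; the real input is the collected Twitter corpus.
tweets = [
    "climate change is accelerating faster than predicted",
    "renewable energy investment is growing",
    "extreme weather events are becoming more frequent",
]

# TF-IDF and CountVectorizer each yield a sparse document-term matrix.
X_tfidf = TfidfVectorizer().fit_transform(tweets)
X_count = CountVectorizer().fit_transform(tweets)

# Word2Vec: train on tokenized tweets, then average word vectors per tweet.
tokens = [t.split() for t in tweets]
w2v = Word2Vec(sentences=tokens, vector_size=100, min_count=1, seed=42)
X_w2v = np.vstack([np.mean([w2v.wv[w] for w in toks], axis=0) for toks in tokens])

# Combined encodings such as TF-IDF + CountVectorizer (Table 5) can be built
# by horizontally stacking the individual feature matrices; dense Word2Vec
# features (Tables 6 and 7) are converted to sparse form before stacking.
X_tfidf_count = hstack([X_tfidf, X_count])
X_tfidf_w2v = hstack([X_tfidf, csr_matrix(X_w2v)])
```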


Tables 1 to 8 show the accuracy (A), precision (P), recall (R), and F-measure (F) values for the RF, SVM, DT, and LR algorithms. For BERT embeddings, RF scores 76.78%, 77.46%, 76.78%, and 76.93%; SVM scores 64.35%, 63.65%, 64.35%, and 63.70%; DT scores 68.89%, 67.13%, 68.89%, and 67.59%; and LR scores 63.81%, 63.48%, 63.81%, and 63.60% for A, P, R, and F, respectively.
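These BERT-based results pair contextual embeddings with classical classifiers. A hedged sketch of one way to build such a pipeline follows; the checkpoint name, the use of the [CLS] token as the tweet representation, and the default classifier settings are assumptions, since the paper does not specify them here.

```python
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# "bert-base-uncased" is an assumed checkpoint; a ClimateBERT checkpoint
# published on Hugging Face could be substituted for those experiments.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")
bert.eval()  # inference only; the classifiers are trained on frozen embeddings

def embed(texts):
    """Encode texts and return the [CLS] embedding of each one."""
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = bert(**enc)
    return out.last_hidden_state[:, 0, :].numpy()

# Hypothetical labeled tweets (1 = positive sentiment, 0 = negative).
train_texts = ["the climate crisis demands urgent action",
               "warming claims are wildly exaggerated"]
train_labels = [1, 0]
X_train = embed(train_texts)

# The four classifiers compared in the tables, with assumed default settings.
classifiers = {
    "RF": RandomForestClassifier(random_state=42),
    "SVM": SVC(),
    "DT": DecisionTreeClassifier(random_state=42),
    "LR": LogisticRegression(max_iter=1000),
}
for name, clf in classifiers.items():
    clf.fit(X_train, train_labels)
```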



Table 6: Precision, Recall, Accuracy, and F-Measure values for TF-IDF + Word2Vec


Table 7: Precision, Recall, Accuracy, and F-Measure values for CountVectorizer + Word2Vec


Table 4 shows the accuracy, precision, recall, and F-measure values for the RF, SVM, DT, and LR algorithms with ClimateBERT embeddings: RF scores 85.22%, 85.73%, 85.22%, and 83.33%; SVM scores 75.66%, 76.20%, 75.66%, and 75.07%; DT scores 80.62%, 79.88%, 78.62%, and 77.47%; and LR scores 73.84%, 72.92%, 73.84%, and 75.69% for A, P, R, and F, respectively.

After training, the model’s performance is evaluated on the test set to assess its ability to predict sentiment. The model is switched to evaluation mode, and predictions are made on the test set. Accuracy, precision, recall, and F1-score are then calculated (see the sketch below). Accuracy provides an overall measure of correctness; precision measures the proportion of predicted positive sentiments that are actually positive; recall captures the ability to identify all positive sentiments; and the F1-score provides a balanced measure between precision and recall. Together, these metrics show how well the model predicts sentiment on climate change-related tweets.

By following this experimental setup, the methodology ensures that the collected data is cleaned, balanced, and used effectively to train a sentiment analysis model. The results and discussion provide valuable insights into the model’s performance and its ability to analyze sentiment in climate change discussions on Twitter.
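The evaluation described above maps directly onto standard scikit-learn metrics. A minimal sketch follows; the label arrays are hypothetical, and weighted averaging is an assumed choice for the multi-class case, since the paper does not state its averaging scheme.

```python
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

y_test = [0, 1, 2, 1, 0, 2]  # hypothetical true sentiment labels
y_pred = [0, 1, 1, 1, 0, 2]  # hypothetical model predictions

# "weighted" averaging accounts for class imbalance across sentiment classes;
# this choice is an assumption, not a detail confirmed by the paper.
print("Accuracy :", accuracy_score(y_test, y_pred))
print("Precision:", precision_score(y_test, y_pred, average="weighted"))
print("Recall   :", recall_score(y_test, y_pred, average="weighted"))
print("F1-score :", f1_score(y_test, y_pred, average="weighted"))
```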