Please use this identifier to cite or link to this item: https://dspace.iiti.ac.in/handle/123456789/12564
Full metadata record
DC Field: Value (Language)
dc.contributor.author: Anshul, Aditya (en_US)
dc.contributor.author: Pranav, Gumpili Sai (en_US)
dc.contributor.author: Rehman, Mohammad Zia Ur (en_US)
dc.contributor.author: Kumar, Nagendra (en_US)
dc.date.accessioned: 2023-12-14T12:37:38Z
dc.date.available: 2023-12-14T12:37:38Z
dc.date.issued: 2023
dc.identifier.citation: Anshul, A., Pranav, G. S., Rehman, M. Z. U., & Kumar, N. (2023). A Multimodal Framework for Depression Detection During COVID-19 via Harvesting Social Media. IEEE Transactions on Computational Social Systems. Scopus. https://doi.org/10.1109/TCSS.2023.3309229 (en_US)
dc.identifier.issn: 2329-924X
dc.identifier.other: EID(2-s2.0-85171544882)
dc.identifier.uri: https://doi.org/10.1109/TCSS.2023.3309229
dc.identifier.uri: https://dspace.iiti.ac.in/handle/123456789/12564
dc.description.abstract: The recent coronavirus disease (COVID-19) outbreak became a pandemic that affected the entire globe. During the pandemic, we observed a spike in mental health conditions such as anxiety, stress, and depression. Depression is a major contributor to the global disease burden, yet it often goes undetected because people are unaware of their condition or unwilling to consult a doctor. Nowadays, however, people extensively use online social media platforms to express their emotions and thoughts, so these platforms have become a large data source that can be utilized for detecting depression and mental illness. Existing approaches often overlook data sparsity in tweets and the multimodal aspects of social media. In this article, we propose a novel multimodal framework that combines textual, user-specific, and image analysis to detect depression among social media users. To provide enough context about a user's emotional state, we propose the following: 1) an extrinsic feature obtained by harnessing the URLs present in tweets and 2) extraction of the textual content present in images posted in tweets. We also extract five sets of features belonging to different modalities to describe a user. In addition, we introduce a deep learning model, the visual neural network (VNN), to generate embeddings of user-posted images, which are used to create the visual feature vector for prediction. We contribute a curated COVID-19 dataset of depressed and nondepressed users for research purposes and demonstrate the effectiveness of our model in detecting depression during the COVID-19 outbreak. Our model outperforms the existing state-of-the-art methods on a benchmark dataset by 2%–8% and produces promising results on the COVID-19 dataset. Our analysis highlights the impact of each modality and provides valuable insights into users' mental and emotional states. (en_US)
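The abstract describes a late-fusion design: separate textual, user-specific, and visual feature vectors are combined into one representation before classification. As a rough illustration of that idea only, the sketch below concatenates three stand-in modality vectors and scores them with a toy logistic head; the dimensions, the random stand-in features, and the classifier are hypothetical assumptions, not the paper's actual VNN or feature extractors.

```python
import math
import random

random.seed(0)

def fuse(text_vec, user_vec, visual_vec):
    """Late fusion: concatenate per-modality feature vectors into one list."""
    return list(text_vec) + list(user_vec) + list(visual_vec)

# Stand-in embeddings; in the paper these would come from the text,
# user-profile, and image (VNN) feature extractors. Sizes are arbitrary.
text_vec = [random.gauss(0, 1) for _ in range(128)]
user_vec = [random.gauss(0, 1) for _ in range(16)]
visual_vec = [random.gauss(0, 1) for _ in range(64)]

features = fuse(text_vec, user_vec, visual_vec)  # 128 + 16 + 64 = 208 dims

# Toy logistic "depressed vs. nondepressed" head with random weights,
# purely to show where the fused vector would be consumed.
w = [random.gauss(0, 1) for _ in range(len(features))]
score = sum(f * wi for f, wi in zip(features, w))
prob_depressed = 1.0 / (1.0 + math.exp(-score))
```

In practice the random weights would of course be trained, and each stand-in vector replaced by a learned encoder's output; the fusion step itself is the only part this sketch is meant to mirror.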
dc.language.iso: en (en_US)
dc.publisher: Institute of Electrical and Electronics Engineers Inc. (en_US)
dc.source: IEEE Transactions on Computational Social Systems (en_US)
dc.subject: Blogs (en_US)
dc.subject: Coronavirus disease (COVID-19) (en_US)
dc.subject: COVID-19 (en_US)
dc.subject: deep learning (en_US)
dc.subject: Depression (en_US)
dc.subject: depression (en_US)
dc.subject: Feature extraction (en_US)
dc.subject: machine learning (en_US)
dc.subject: multimodal analysis (en_US)
dc.subject: social media (en_US)
dc.subject: Social networking (online) (en_US)
dc.subject: Surveys (en_US)
dc.subject: Visualization (en_US)
dc.title: A Multimodal Framework for Depression Detection During COVID-19 via Harvesting Social Media (en_US)
dc.type: Journal Article (en_US)
Appears in Collections:Department of Computer Science and Engineering
Department of Metallurgical Engineering and Materials Sciences

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.