<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="https://dspace.iiti.ac.in:8080/jspui/handle/123456789/9539">
    <title>DSpace Collection</title>
    <link>https://dspace.iiti.ac.in:8080/jspui/handle/123456789/9539</link>
    <description />
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="https://dspace.iiti.ac.in:8080/jspui/handle/123456789/18096" />
        <rdf:li rdf:resource="https://dspace.iiti.ac.in:8080/jspui/handle/123456789/17962" />
        <rdf:li rdf:resource="https://dspace.iiti.ac.in:8080/jspui/handle/123456789/17961" />
        <rdf:li rdf:resource="https://dspace.iiti.ac.in:8080/jspui/handle/123456789/17960" />
      </rdf:Seq>
    </items>
    <dc:date>2026-05-12T17:01:13Z</dc:date>
  </channel>
  <item rdf:about="https://dspace.iiti.ac.in:8080/jspui/handle/123456789/18096">
    <title>Investigations in AI-driven cybersecurity methods for evasive and obfuscated malware detection [RESTRICTED THESIS-18 Months]</title>
    <link>https://dspace.iiti.ac.in:8080/jspui/handle/123456789/18096</link>
    <description>Title: Investigations in AI-driven cybersecurity methods for evasive and obfuscated malware detection [RESTRICTED THESIS-18 Months]
Authors: Sharmila S P
Abstract: [Abstract is restricted for 18 Months, due to IPR related issue]</description>
    <dc:date>2026-03-16T00:00:00Z</dc:date>
  </item>
  <item rdf:about="https://dspace.iiti.ac.in:8080/jspui/handle/123456789/17962">
    <title>Deep learning-assisted interpretable retinopathy of prematurity (ROP) diagnosis</title>
    <link>https://dspace.iiti.ac.in:8080/jspui/handle/123456789/17962</link>
    <description>Title: Deep learning-assisted interpretable retinopathy of prematurity (ROP) diagnosis
Authors: Trivedi, Urvesh
Abstract: Retinopathy of Prematurity (ROP) is a sight-threatening retinal eye disease primarily affecting premature babies due to the growth of abnormal blood vessels in the retina. Early detection and timely treatment are essential to prevent irreversible vision loss; however, effective screening for ROP is limited by the lack of resources and trained ophthalmologists, especially in underserved and rural areas. In recent years, several studies have been conducted on the development of reliable AI-based screening systems. However, due to the lack of well-annotated public datasets, most of them give partial diagnostic solutions and are limited to experimental research using single-center datasets. Our work presents a deep learning-assisted diagnostic framework for automated and interpretable ROP screening. The proposed system is composed of three key modules: (1) zone separation, (2) ridge (or demarcation line) detection, and (3) blood vessel segmentation, each targeting clinically relevant retinal features defined by the International Classification of ROP (ICROP) guidelines. To support and evaluate the framework, we created a comprehensive, expert-annotated dataset, Macretina, consisting of 1,432 retinal fundus images from 112 premature infants. Finally, we integrated our novel diagnostic framework into a lightweight mobile application designed for real-time deployment in neonatal care units. Our proposed solution demonstrates high diagnostic accuracy, interpretability, and scalability, offering a clinically viable tool for early ROP screening, especially in low-resource and telemedicine settings.</description>
    <dc:date>2026-02-25T00:00:00Z</dc:date>
  </item>
  <item rdf:about="https://dspace.iiti.ac.in:8080/jspui/handle/123456789/17961">
    <title>UAV-enabled semantic segmentation for precision farming using deep learning</title>
    <link>https://dspace.iiti.ac.in:8080/jspui/handle/123456789/17961</link>
    <description>Title: UAV-enabled semantic segmentation for precision farming using deep learning
Authors: Kanade, Aditya
Abstract: In this work, we study the semantic segmentation of images captured by Unmanned Aerial Vehicles (UAVs) for enhanced crop and weed segmentation in precision agriculture. In the existing literature, researchers have studied segmentation techniques; however, deep feature extraction is needed to capture spatial and contextual information, especially in the complex agricultural domain, where crop, weed, and background pixels overlap. To address this, we present the VResUNet++ architecture, which combines VGG16 and ResNet50 in the backbone of UNet for deep semantic feature extraction, improving segmentation accuracy and performance. This improved segmentation method helps in weed detection, crop health monitoring, and early disease detection. Our hybrid model outperforms state-of-the-art models such as UNet, UNetResNet50, and UNetVGG16. Extensive experiments show significant improvement, with a precision of 99.83%, recall of 98.65%, and accuracy of 98.69% on the Weedmap dataset.</description>
    <dc:date>2025-07-20T00:00:00Z</dc:date>
  </item>
  <item rdf:about="https://dspace.iiti.ac.in:8080/jspui/handle/123456789/17960">
    <title>Emotion-aware dual cross-attentive neural network with label fusion for stance detection in misinformative social media content</title>
    <link>https://dspace.iiti.ac.in:8080/jspui/handle/123456789/17960</link>
    <description>Title: Emotion-aware dual cross-attentive neural network with label fusion for stance detection in misinformative social media content
Authors: Pangtey, Lata
Abstract: Stance detection determines a user’s opinion toward a particular target or statement. The task helps analyze underlying biases in shared information and combat misinformation. Social media generates massive amounts of user-generated content (UGC), which often conveys implicit opinions that contribute to the spread of misinformation. We propose SPLAENet, a Stance Prediction method using a Label-fused dual cross-Attentive Emotion-aware neural Network, for misinformative social media user-generated content. It uses a dual cross-attention mechanism that focuses on relevant parts of the source text in the context of the reply text, and vice versa. We incorporate emotions to distinguish between stance categories, since emotional alignment or divergence between texts helps separate different stances. We also employ label fusion, which uses distance-metric learning to align extracted features with stance labels and improves the method’s ability to accurately distinguish between stances. Extensive experiments demonstrate that SPLAENet achieves significant improvements over existing state-of-the-art methods across three datasets. On the RumourEval dataset, our method shows an average gain of 8.92% in accuracy and 17.36% in F1-score. On the SemEval dataset, it gains 7.02% in accuracy and 10.92% in F1-score. On the P-stance dataset, it shows average gains of 10.03% in accuracy and 11.18% in F1-score. These results validate the effectiveness of the proposed method for stance detection in the context of misinformative social media content.</description>
    <dc:date>2026-01-27T00:00:00Z</dc:date>
  </item>
</rdf:RDF>

