Please use this identifier to cite or link to this item: https://dspace.iiti.ac.in/handle/123456789/16772
Full metadata record
DC Field | Value | Language
dc.contributor.author | Saini, Krishanu | en_US
dc.contributor.author | Sethi, Anikeit | en_US
dc.contributor.author | Singh, Rituraj | en_US
dc.contributor.author | Tiwari, Aruna | en_US
dc.contributor.author | Saurav, Sumeet | en_US
dc.contributor.author | Singh, Sanjay | en_US
dc.date.accessioned | 2025-09-04T12:47:47Z | -
dc.date.available | 2025-09-04T12:47:47Z | -
dc.date.issued | 2026 | -
dc.identifier.citation | Saini, K., Sethi, A., Singh, R., Tiwari, A., Saurav, S., & Singh, S. (2026). CUNIT-GAN: Constraining Latent Space for Unsupervised Multi-domain Image-to-Image Translation via Generative Adversarial Network. Communications in Computer and Information Science, 2473 CCIS, 159–174. https://doi.org/10.1007/978-3-031-93688-3_12 | en_US
dc.identifier.isbn | 9789819671748 | -
dc.identifier.isbn | 9789819664610 | -
dc.identifier.isbn | 9789819666874 | -
dc.identifier.isbn | 9783031936968 | -
dc.identifier.isbn | 9783031941207 | -
dc.identifier.isbn | 9789819669653 | -
dc.identifier.isbn | 9783031961953 | -
dc.identifier.isbn | 9783031937026 | -
dc.identifier.isbn | 9789819670079 | -
dc.identifier.isbn | 9789819699933 | -
dc.identifier.issn | 1865-0937 | -
dc.identifier.issn | 1865-0929 | -
dc.identifier.other | EID(2-s2.0-105012026788) | -
dc.identifier.uri | https://dx.doi.org/10.1007/978-3-031-93688-3_12 | -
dc.identifier.uri | https://dspace.iiti.ac.in:8080/jspui/handle/123456789/16772 | -
dc.description.abstract | Image-to-image translation has gained significant interest due to the success of deep learning models that enforce cycle-consistency constraints. However, recent studies are largely limited to a subset of domains with significant constraints on style or texture variation, and these models show limited performance in multi-domain settings where one image is translated into numerous domains. We propose a Constrained Unsupervised Image-to-Image Generative Adversarial Network (CUNIT-GAN) to address these problems. It consists of an asymmetric auto-encoder (AE) based generator network and a dual-purpose discriminator network that detects real and fake samples and classifies the input image. This study focuses on enhancing the explainability and representation power of the multi-domain latent space through our novel latent contrastive loss, which leads to the clustering of class-level feature embeddings and the decoupling of the latent space. The effectiveness of CUNIT-GAN is demonstrated through a comprehensive qualitative and quantitative analysis conducted on benchmark multi-domain image datasets. © 2025 Elsevier B.V., All rights reserved. | en_US
dc.language.iso | en | en_US
dc.publisher | Springer Science and Business Media Deutschland GmbH | en_US
dc.source | Communications in Computer and Information Science | en_US
dc.subject | Contrastive Learning | en_US
dc.subject | Generative Adversarial Networks | en_US
dc.subject | Image-to-image Translation | en_US
dc.subject | Discriminators | en_US
dc.subject | Learning Systems | en_US
dc.subject | Textures | en_US
dc.subject | Adversarial Networks | en_US
dc.subject | Auto Encoders | en_US
dc.subject | Consistency Constraints | en_US
dc.subject | Image Translation | en_US
dc.subject | Input Image | en_US
dc.subject | Learning Models | en_US
dc.subject | Multi-domains | en_US
dc.subject | Performance | en_US
dc.subject | Texture Variation | en_US
dc.subject | Deep Learning | en_US
dc.title | CUNIT-GAN: Constraining Latent Space for Unsupervised Multi-domain Image-to-Image Translation via Generative Adversarial Network | en_US
dc.type | Conference Paper | en_US
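The abstract describes a latent contrastive loss that clusters class-level feature embeddings in the latent space. The paper's exact formulation is not given in this record; the sketch below is an illustrative supervised contrastive loss over labeled embeddings (function names, the cosine-similarity choice, and the temperature value are assumptions, not the authors' implementation):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def latent_contrastive_loss(embeddings, labels, temperature=0.1):
    """Supervised contrastive loss over latent embeddings: pulls
    same-class (same-domain) embeddings together and pushes
    different-class embeddings apart, which encourages class-level
    clustering of the latent space."""
    n = len(embeddings)
    total, count = 0.0, 0
    for i in range(n):
        positives = [p for p in range(n) if p != i and labels[p] == labels[i]]
        if not positives:
            continue  # anchors with no positive pair contribute nothing
        # Denominator sums over all other samples (positives and negatives).
        denom = sum(math.exp(cosine(embeddings[i], embeddings[a]) / temperature)
                    for a in range(n) if a != i)
        for p in positives:
            num = math.exp(cosine(embeddings[i], embeddings[p]) / temperature)
            total += -math.log(num / denom)
            count += 1
    return total / count
```

Under this formulation, a latent space whose same-class embeddings are already clustered yields a lower loss than one where classes are interleaved, so minimizing it drives the clustering behavior the abstract attributes to the latent contrastive loss.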
Appears in Collections:Department of Computer Science and Engineering

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.