Please use this identifier to cite or link to this item: https://dspace.iiti.ac.in/handle/123456789/6573
Full metadata record
DC Field | Value | Language
dc.contributor.author | Tanveer, M. | en_US
dc.date.accessioned | 2022-03-17T01:00:00Z | -
dc.date.accessioned | 2022-03-21T10:49:51Z | -
dc.date.available | 2022-03-17T01:00:00Z | -
dc.date.available | 2022-03-21T10:49:51Z | -
dc.date.issued | 2021 | -
dc.identifier.citation | Tanveer, M., Sharma, S., & Muhammad, K. (2021). Large-scale least squares twin SVMs. ACM Transactions on Internet Technology, 21(2). doi:10.1145/3398379 | en_US
dc.identifier.issn | 1533-5399 | -
dc.identifier.other | EID(2-s2.0-85107645455) | -
dc.identifier.uri | https://doi.org/10.1145/3398379 | -
dc.identifier.uri | https://dspace.iiti.ac.in/handle/123456789/6573 | -
dc.description.abstract | In the last decade, twin support vector machine (TWSVM) classifiers have received considerable attention for pattern classification tasks. However, the TWSVM formulation still suffers from two shortcomings: (1) TWSVM requires inverse matrix calculations in its Wolfe-dual problems, which are intractable for large-scale datasets with numerous features and samples, and (2) TWSVM minimizes the empirical risk instead of the structural risk in its formulation. With the advent of huge amounts of data today, these disadvantages render TWSVM an ineffective choice for pattern classification tasks. In this article, we propose an efficient large-scale least squares twin support vector machine (LS-LSTSVM) for pattern classification that rectifies all the aforementioned shortcomings. The proposed LS-LSTSVM introduces different Lagrangian functions to eliminate the need for calculating inverse matrices. The proposed LS-LSTSVM also does not employ kernel-generated surfaces for the non-linear case, and thus uses the kernel trick directly. This ensures that the proposed LS-LSTSVM model is superior to the original TWSVM and LSTSVM. Lastly, the structural risk is minimized in LS-LSTSVM, which embodies the essence of statistical learning theory and can consequently improve classification accuracy. The proposed LS-LSTSVM is solved using the sequential minimal optimization (SMO) technique, making it more suitable for large-scale problems. We further prove the convergence of the proposed LS-LSTSVM. Exhaustive experiments on several real-world benchmarks and NDC-based large-scale datasets demonstrate that the proposed LS-LSTSVM is feasible for large datasets and, in most cases, performs better than existing algorithms. © 2021 Association for Computing Machinery. | en_US
dc.language.iso | en | en_US
dc.publisher | Association for Computing Machinery | en_US
dc.source | ACM Transactions on Internet Technology | en_US
dc.subject | Classification (of information) | en_US
dc.subject | Inverse problems | en_US
dc.subject | Matrix algebra | en_US
dc.subject | Optimization | en_US
dc.subject | Support vector machines | en_US
dc.subject | Classification accuracy | en_US
dc.subject | Lagrangian functions | en_US
dc.subject | Large-scale datasets | en_US
dc.subject | Large-scale problem | en_US
dc.subject | Least squares twin support vector machines | en_US
dc.subject | Sequential minimal optimization | en_US
dc.subject | Statistical learning theory | en_US
dc.subject | Twin support vector machines | en_US
dc.subject | Large dataset | en_US
dc.title | Large-Scale Least Squares Twin SVMs | en_US
dc.type | Journal Article | en_US
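For context on the abstract above: a minimal sketch of the classical linear least squares twin SVM (LSTSVM) baseline that the article builds on. This is not the authors' SMO-based LS-LSTSVM; it deliberately solves the two small linear systems directly, which is exactly the inverse-matrix step the proposed method eliminates. The function names (`lstsvm_fit`, `lstsvm_predict`), hyperparameters `c1`, `c2`, and the small ridge term `eps` are illustrative choices, not from the paper.

```python
import numpy as np

def lstsvm_fit(A, B, c1=1.0, c2=1.0, eps=1e-6):
    """Classical linear LSTSVM: A holds class +1 samples (rows),
    B holds class -1 samples. Returns the two augmented normals
    u1 = [w1; b1] and u2 = [w2; b2] of the non-parallel hyperplanes."""
    H = np.hstack([A, np.ones((A.shape[0], 1))])   # [A  e1]
    G = np.hstack([B, np.ones((B.shape[0], 1))])   # [B  e2]
    I = np.eye(H.shape[1])
    # Plane 1: close to class +1, pushed away from class -1.
    # Direct solve of the normal equations -- the costly inversion
    # that the article's SMO-based solver avoids.
    u1 = -np.linalg.solve(H.T @ H / c1 + G.T @ G + eps * I,
                          G.T @ np.ones(G.shape[0]))
    # Plane 2: close to class -1, pushed away from class +1.
    u2 = np.linalg.solve(G.T @ G / c2 + H.T @ H + eps * I,
                         H.T @ np.ones(H.shape[0]))
    return u1, u2

def lstsvm_predict(X, u1, u2):
    """Assign each row of X to the class of the nearer hyperplane."""
    Xe = np.hstack([X, np.ones((X.shape[0], 1))])
    d1 = np.abs(Xe @ u1) / np.linalg.norm(u1[:-1])  # distance to plane 1
    d2 = np.abs(Xe @ u2) / np.linalg.norm(u2[:-1])  # distance to plane 2
    return np.where(d1 <= d2, 1, -1)
```

The twin-SVM idea visible here: each class gets its own hyperplane fitted close to it and far from the other class, and a test point takes the label of the nearer plane. The `(H.T @ H / c1 + G.T @ G)` system is exactly the structure whose inversion becomes intractable at scale, motivating the paper's SMO reformulation.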
Appears in Collections: Department of Mathematics

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.