Co-training for visual object recognition based on self-supervised models using a cross-entropy regularization

datacite.alternateIdentifier.citation: Entropy, 23 (4), 2021
datacite.alternateIdentifier.doi: 10.3390/e23040423
datacite.alternateIdentifier.issn: 1099-4300
datacite.creator: Díaz, Gabriel
datacite.creator: Peralta, Billy M.
datacite.creator: Caro, Luis Alberto
datacite.creator: Nicolis, Orietta
datacite.date: 2021
datacite.rights: Open access
datacite.subject: Co-training
datacite.subject: Deep Learning
datacite.subject: Self-supervised Learning
datacite.subject: Semi-supervised Learning
datacite.title: Co-training for visual object recognition based on self-supervised models using a cross-entropy regularization
dc.description.abstract: Automatic recognition of visual objects using a deep learning approach has been successfully applied to multiple areas. However, deep learning techniques require a large amount of labeled data, which is usually expensive to obtain. An alternative is to use semi-supervised models, such as co-training, where multiple complementary views are combined using a small amount of labeled data. A simple way to associate views with visual objects is through the application of a degree of rotation or a type of filter. In this work, we propose a co-training model for visual object recognition using deep neural networks by adding layers of self-supervised neural networks as intermediate inputs to the views, where the views are diversified through the cross-entropy regularization of their outputs. Since the model merges the concepts of co-training and self-supervised learning by considering the differentiation of outputs, we call it Differential Self-Supervised Co-Training (DSSCo-Training). This paper presents experiments applying the DSSCo-Training model to well-known image datasets such as MNIST, CIFAR-100, and SVHN. The results indicate that the proposed model is competitive with state-of-the-art models and shows an average relative improvement of 5% in accuracy across several datasets, despite its greater simplicity with respect to more recent approaches.
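Note: as an informal illustration of the output-differentiation idea described in the abstract, the following is a minimal, hypothetical PyTorch-style sketch, not the authors' implementation. The function names diversification_loss and co_training_loss and the weight lam are assumptions introduced here for illustration; the actual DSSCo-Training architecture and loss are defined in the paper.

# Hypothetical sketch (not the authors' code): two co-training views whose
# softmax outputs are pushed apart by a cross-entropy term between them,
# on top of the usual supervised loss over the small labeled set.
import torch.nn.functional as F

def diversification_loss(logits_a, logits_b):
    # Cross-entropy between the two views' output distributions; returning
    # its negative means minimizing this term encourages the views to differ.
    p_a = F.softmax(logits_a, dim=1)
    log_p_b = F.log_softmax(logits_b, dim=1)
    ce_between_views = -(p_a * log_p_b).sum(dim=1).mean()
    return -ce_between_views

def co_training_loss(logits_a, logits_b, labels, lam=0.1):
    # Supervised cross-entropy for both views on labeled data, plus the
    # diversification term; lam is an illustrative trade-off weight.
    supervised = F.cross_entropy(logits_a, labels) + F.cross_entropy(logits_b, labels)
    return supervised + lam * diversification_loss(logits_a, logits_b)

In an actual co-training loop, logits_a and logits_b would come from the two self-supervised views of the same image batch; any handling of unlabeled data (e.g., pseudo-labeling between views) follows the procedure described in the paper.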
dc.description.ia_keyword: training, supervised, visual, views, model, self, recognition
dc.format: PDF
dc.identifier.uri: https://repositoriodigital.uct.cl/handle/10925/4349
dc.language.iso: en
dc.publisher: Multidisciplinary Digital Publishing Institute (MDPI)
dc.relation: instname: ANID
dc.relation: reponame: Repositorio Digital RI2.0
dc.rights.driver: info:eu-repo/semantics/openAccess
dc.source: Entropy
dc.subject.ia_ods: SDG 4: Quality Education
dc.subject.ia_oecd1n: Social Sciences
dc.subject.ia_oecd2n: Education
dc.subject.ia_oecd3n: General Education
dc.type.driver: info:eu-repo/semantics/article
dc.type.driver: http://purl.org/coar/resource_type/c_2df8fbb1
dc.type.openaire: info:eu-repo/semantics/publishedVersion
dspace.entity.type: Publication
oaire.citationEdition: 2021
oaire.citationIssue: 4
oaire.citationTitle: Entropy
oaire.citationVolume: 23
oaire.fundingReference: ANID FONDECYT 1201478 (Regular)
oaire.licenseCondition: Work licensed under Creative Commons Attribution 4.0 International
oaire.licenseCondition.uri: https://creativecommons.org/licenses/by/4.0/
oaire.resourceType: Article
oaire.resourceType.en: Article
uct.catalogador: jvu
uct.comunidad: Ingeniería
uct.departamento: Departamento de Ingeniería Informática
uct.facultad: Facultad de Ingeniería
uct.indizacion: Science Citation Index Expanded - SCIE
uct.indizacion: Scopus
uct.indizacion: DOAJ
uct.indizacion: Inspec
Files
Original bundle
Name: Díaz et al. - 2021 - Entropy - Co-Training for Visual Object.pdf
Size: 544.79 KB
Format: Adobe Portable Document Format