Co-training for visual object recognition based on self-supervised models using a cross-entropy regularization
| Field | Value | Language |
| --- | --- | --- |
| datacite.alternateIdentifier.citation | Entropy, 23 (4), 2021 | |
| datacite.alternateIdentifier.doi | 10.3390/e23040423 | |
| datacite.alternateIdentifier.issn | 1099-4300 | |
| datacite.creator | Díaz, Gabriel | |
| datacite.creator | Peralta, Billy M. | |
| datacite.creator | Caro, Luis Alberto | |
| datacite.creator | Nicolis, Orietta | |
| datacite.date | 2021 | |
| datacite.rights | Open access | |
| datacite.subject | Co-training | |
| datacite.subject | Deep Learning | |
| datacite.subject | Self-supervised Learning | |
| datacite.subject | Semi-supervised Learning | |
| datacite.title | Co-training for visual object recognition based on self-supervised models using a cross-entropy regularization | |
| dc.description.abstract | Automatic recognition of visual objects using a deep learning approach has been successfully applied to multiple areas. However, deep learning techniques require a large amount of labeled data, which is usually expensive to obtain. An alternative is to use semi-supervised models, such as co-training, where multiple complementary views are combined using a small amount of labeled data. A simple way to associate views with visual objects is through the application of a degree of rotation or a type of filter. In this work, we propose a co-training model for visual object recognition using deep neural networks by adding layers of self-supervised neural networks as intermediate inputs to the views, where the views are diversified through the cross-entropy regularization of their outputs. Since the model merges the concepts of co-training and self-supervised learning by considering the differentiation of outputs, we called it Differential Self-Supervised Co-Training (DSSCo-Training). This paper presents experiments applying the DSSCo-Training model to well-known image datasets such as MNIST, CIFAR-100, and SVHN. The results indicate that the proposed model is competitive with state-of-the-art models and shows an average relative improvement of 5% in accuracy across several datasets, despite its greater simplicity with respect to more recent approaches. | |
| dc.description.ia_keyword | training, supervised, visual, views, model, self, recognition | |
| dc.format | | |
| dc.identifier.uri | https://repositoriodigital.uct.cl/handle/10925/4349 | |
| dc.language.iso | en | |
| dc.publisher | Multidisciplinary Digital Publishing Institute (MDPI) | |
| dc.relation | instname: ANID | |
| dc.relation | reponame: Repositorio Digital RI2.0 | |
| dc.rights.driver | info:eu-repo/semantics/openAccess | |
| dc.source | Entropy | |
| dc.subject.ia_ods | SDG 4: Quality Education | |
| dc.subject.ia_oecd1n | Social Sciences | |
| dc.subject.ia_oecd2n | Education | |
| dc.subject.ia_oecd3n | General Education | |
| dc.type.driver | info:eu-repo/semantics/article | |
| dc.type.driver | http://purl.org/coar/resource_type/c_2df8fbb1 | |
| dc.type.openaire | info:eu-repo/semantics/publishedVersion | |
| dspace.entity.type | Publication | |
| oaire.citationEdition | 2021 | |
| oaire.citationIssue | 4 | |
| oaire.citationTitle | Entropy | |
| oaire.citationVolume | 23 | |
| oaire.fundingReference | ANID FONDECYT 1201478 (Regular) | |
| oaire.licenseCondition | Work licensed under Creative Commons Attribution 4.0 International | |
| oaire.licenseCondition.uri | https://creativecommons.org/licenses/by/4.0/ | |
| oaire.resourceType | Article | |
| oaire.resourceType.en | Article | |
| uct.catalogador | jvu | |
| uct.comunidad | Ingeniería | en_US |
| uct.departamento | Departamento de Ingeniería Informática | |
| uct.facultad | Facultad de Ingeniería | |
| uct.indizacion | Science Citation Index Expanded - SCIE | |
| uct.indizacion | Scopus | |
| uct.indizacion | DOAJ | |
| uct.indizacion | Inspec |
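
The abstract above describes diversifying the co-training views through a cross-entropy regularization of their outputs. The exact formulation is given in the paper; what follows is only a minimal Python/NumPy sketch of one plausible reading, in which each view is trained with supervised cross-entropy and the cross-entropy between the two views' predicted distributions is subtracted, weighted by an assumed trade-off hyperparameter `lam`, so that minimizing the total loss pushes the views apart. All function names, the `lam` weight, and the toy data are illustrative assumptions, not taken from the paper or its code.

```python
# Hypothetical sketch (not the authors' implementation): a co-training-style loss
# where each view is fit with standard cross-entropy on labeled data, while a
# cross-entropy term between the two views' output distributions acts as a
# diversity regularizer that keeps the views from collapsing onto each other.
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(p, q, eps=1e-12):
    # Mean cross-entropy H(p, q) over a batch of probability distributions.
    return -np.mean(np.sum(p * np.log(q + eps), axis=1))

def dssco_style_loss(logits_a, logits_b, labels_onehot, lam=0.1):
    """Supervised cross-entropy for each view, minus a weighted cross-entropy
    between the two views' outputs: minimizing the total loss therefore
    increases the disagreement term, diversifying the views. `lam` is an
    assumed hyperparameter, not a value from the paper."""
    p_a, p_b = softmax(logits_a), softmax(logits_b)
    supervised = cross_entropy(labels_onehot, p_a) + cross_entropy(labels_onehot, p_b)
    diversity = cross_entropy(p_a, p_b)  # larger => more dissimilar view outputs
    return supervised - lam * diversity

# Toy usage: logits from two views for a batch of 4 samples and 3 classes.
rng = np.random.default_rng(0)
logits_a, logits_b = rng.normal(size=(4, 3)), rng.normal(size=(4, 3))
labels = np.eye(3)[[0, 1, 2, 1]]
print(dssco_style_loss(logits_a, logits_b, labels))
```

In a full co-training pipeline, such logits would typically come from two classifier heads built on self-supervised feature extractors, with unlabeled examples exchanged between views via confident pseudo-labels; the sketch covers only the loss term suggested by the abstract.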
Files
Original bundle
- Díaz et al. - 2021 - Entropy - Co-Training for Visual Object.pdf (544.79 KB, Adobe Portable Document Format)