This repository contains code for face emotion recognition developed in the RSF (Russian Science Foundation) project no. 20-71-10010 (Efficient audiovisual analysis of dynamical changes in emotional state based on an information-theoretic approach).
If you use our models, please cite the following papers:
```BibTex
@inproceedings{savchenko2021facial,
  title={Facial expression and attributes recognition based on multi-task learning of lightweight neural networks},
  author={Savchenko, Andrey V.},
  booktitle={Proceedings of the 19th International Symposium on Intelligent Systems and Informatics (SISY)},
  pages={119--124},
  year={2021},
  organization={IEEE},
  url={https://arxiv.org/abs/2103.17107}
}
```

```BibTex
@inproceedings{Savchenko_2022_CVPRW,
  author    = {Savchenko, Andrey V.},
  title     = {Video-Based Frame-Level Facial Analysis of Affective Behavior on Mobile Devices Using EfficientNets},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2022},
  pages     = {2359--2366},
  url={https://arxiv.org/abs/2103.17107}
}
```

```BibTex
@article{savchenko2022classifying,
  title={Classifying emotions and engagement in online learning based on a single facial expression recognition neural network},
  author={Savchenko, Andrey V. and Savchenko, Lyudmila V. and Makarov, Ilya},
  journal={IEEE Transactions on Affective Computing},
  year={2022},
  publisher={IEEE},
  url={https://ieeexplore.ieee.org/document/9815154}
}
```

**[News]** Our models helped our team HSE-NN take 3rd place in the Multi-Task Learning challenge, 4th place in the Valence-Arousal and Expression challenges, and 5th place in the Action Unit Detection challenge of the [third Affective Behavior Analysis in-the-wild (ABAW) Competition](https://ibug.doc.ic.ac.uk/resources/cvpr-2022-3rd-abaw/). Our approach is presented in a [paper](https://arxiv.org/abs/2203.13436) accepted at the CVPR 2022 ABAW Workshop.

All the models were pre-trained for the face identification task on the [VGGFace2 dataset](https://github.com/ox-vgg/vgg_face2). The PyTorch models were trained using borrowed [SAM code](https://github.com/davda54/sam).
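As a rough illustration of how the borrowed SAM optimizer works, here is a minimal sketch of one Sharpness-Aware Minimization step written directly in PyTorch. The model, data, and `rho` value are toy placeholders, not values from this repository, and the training scripts here use the `davda54/sam` package rather than this hand-rolled loop:

```python
import torch

# One SAM step: (1) take the gradient at the current weights,
# (2) climb to a nearby "sharp" point, (3) take the gradient there,
# (4) restore the weights and descend with the second gradient.
torch.manual_seed(0)
model = torch.nn.Linear(4, 2)               # toy stand-in for the CNN
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.CrossEntropyLoss()
x, y = torch.randn(8, 4), torch.randint(0, 2, (8,))
rho = 0.05                                  # neighborhood radius (assumed)

# First pass: gradient at the current weights.
loss_fn(model(x), y).backward()
grad_norm = torch.norm(torch.stack(
    [p.grad.norm() for p in model.parameters()]))
eps = []  # perturbations to undo later
with torch.no_grad():
    for p in model.parameters():
        e = rho * p.grad / (grad_norm + 1e-12)
        p.add_(e)                           # move to the perturbed point
        eps.append(e)
model.zero_grad()

# Second pass: gradient at the perturbed weights.
loss_fn(model(x), y).backward()
with torch.no_grad():
    for p, e in zip(model.parameters(), eps):
        p.sub_(e)                           # restore original weights
opt.step()                                  # sharpness-aware update
```

The `davda54/sam` package wraps the same two-pass scheme behind `first_step()` / `second_step()` calls around a base optimizer, so real training code stays close to a standard PyTorch loop.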