Hello, and good evening. Very solid work on both the paper and the approach. 👍

I note the following from the paper:
> The best feature vector for each attribute is obtained from the designated region via a combination of three shape and color features, which are: RGB histograms, HOG [19], and LBP [20]. The combination is selected empirically to extract the best feature vector for each attribute. A multi-class SVM classification model using LIBSVM [21] is adopted here for training and classification, after dimensionality reduction of the extracted feature vectors using PCA [22].
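For reference, here is a minimal sketch of how I read that pipeline: per-region features (RGB histogram + HOG + LBP) concatenated, then PCA, then a multi-class SVM. All parameter values (histogram bins, HOG cell sizes, LBP settings, PCA dimensionality) are my assumptions, not taken from the paper, and I use scikit-learn's `SVC` (which wraps LIBSVM) rather than LIBSVM directly:

```python
import numpy as np
from skimage.feature import hog, local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def extract_features(region):
    """Concatenate RGB histogram, HOG, and LBP features for one face region.

    `region` is an H x W x 3 uint8 crop (e.g. one Face++ region of interest).
    Bin counts and descriptor parameters below are illustrative guesses.
    """
    # Color: 32-bin histogram per RGB channel, L1-normalized (96 dims)
    hist = np.concatenate([
        np.histogram(region[..., c], bins=32, range=(0, 255))[0]
        for c in range(3)
    ]).astype(float)
    hist /= hist.sum() + 1e-8

    gray = region.mean(axis=2).astype(np.uint8)

    # Shape: HOG on the grayscale region
    hog_vec = hog(gray, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2))

    # Texture: histogram of uniform LBP codes (P=8 neighbours, R=1)
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

    return np.concatenate([hist, hog_vec, lbp_hist])

# PCA for dimensionality reduction, then a multi-class SVM
# (SVC uses LIBSVM's one-vs-one multi-class scheme under the hood)
model = make_pipeline(PCA(n_components=50), SVC(kernel="rbf"))
```

Training would then call `model.fit(X, y)`, where each row of `X` is `extract_features(region)` for one image's region and `y` holds the attribute class labels, which is exactly what my questions below are about.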
So, the Face++ framework is used to identify the regions of interest.
But how do you move from the regions of interest to the classes of the facial attributes?
Did you also create an intermediate dataset to tag the images with the facial attributes?
What did the training of the SVM take as input?