This is the official PyTorch implementation of the paper:
Yizhi Wang and Zhouhui Lian. DeepVecFont: Synthesizing High-quality Vector Fonts via Dual-modality Learning. SIGGRAPH Asia. 2021.
Paper: arXiv | Supplementary: Link | Homepage: DeepVecFont
Given a few vector glyphs of a font as reference, our model generates the full vector font:
Input glyphs and the corresponding glyphs synthesized by DeepVecFont, shown both rendered and as vector outlines (example images omitted).
- Python 3.9
- PyTorch 1.9 (it may work on some lower versions, but this has not been tested)
Please use Anaconda to build the environment:
conda create -n dvf python=3.9
source activate dvf
Install PyTorch following the instructions.
- Others
conda install tensorboardX scikit-image
We utilize diffvg to refine our generated vector glyphs in the testing phase. Please go to https://github.com/BachiLi/diffvg to see how to install it.
Important (updated 2021.10.19): you first need to replace the original diffvg/pydiffvg/save_svg.py with this, and then install.
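For reference, the core diffvg functionality used at test time is a differentiable rasterization of an SVG scene. Below is a minimal pydiffvg sketch (the file name is illustrative, not one produced by the repo's scripts):

```python
import pydiffvg

# Parse a glyph SVG into a differentiable scene (the file name is illustrative).
canvas_width, canvas_height, shapes, shape_groups = pydiffvg.svg_to_scene('glyph.svg')

# Serialize the scene and rasterize it; gradients flow back to the path control points.
scene_args = pydiffvg.RenderFunction.serialize_scene(
    canvas_width, canvas_height, shapes, shape_groups)
render = pydiffvg.RenderFunction.apply
img = render(canvas_width, canvas_height,  # output resolution
             2, 2,                         # anti-aliasing samples per pixel (x, y)
             0,                            # random seed
             None,                         # background image
             *scene_args)
print(img.shape)  # (canvas_height, canvas_width, 4) RGBA tensor
```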
- The Vector Font dataset. Download link: Google Drive. Please download the vecfont_dataset dir and put it under ./data/.
(This dataset is a subset of the one used in SVG-VAE, ICCV 2019. Details on how to create a dataset from your own fonts are given below.)
- The Image Super-resolution dataset. This dataset is very large, so we suggest creating it yourself (see the details below on creating your own dataset).
- The mean and stdev files. Download link: Google Drive. Please download them and put them under ./data/.
- The Neural Rasterizer. Download link: Google Drive. Please download the dvf_neural_raster dir and put it under ./experiments/.
- The Image Super-resolution model. Download link: Google Drive. Please download the image_sr dir and put it under ./experiments/.
- The Main model. Download link: Google Drive. Please download the dvf_main_model dir and put it under ./experiments/.
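Once everything is downloaded and placed, the directories listed above should all exist. A quick sanity check (a sketch, not part of the repo):

```python
import os

# Paths taken from the download instructions above.
expected_dirs = [
    './data/vecfont_dataset',
    './experiments/dvf_neural_raster',
    './experiments/image_sr',
    './experiments/dvf_main_model',
]
for path in expected_dirs:
    print(path, '->', 'ok' if os.path.isdir(path) else 'MISSING')
```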
Note that we recently switched from TensorFlow to PyTorch; we may update the models with better-performing versions.
To train our main model, run
python main.py --mode train --experiment_name dvf --model_name main_model
The configurations can be found in options.py.
To test our main model, run
python test_sf.py --mode test --experiment_name dvf --model_name main_model --test_epoch 1500 --batch_size 1 --mix_temperature 0.0001 --gauss_temperature 0.01
This will output the synthesized fonts without refinement. Note that batch_size must be set to 1. The results will be written to ./experiments/dvf_main_model/results/.
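The two temperature flags control how SVG command arguments are sampled from the model's mixture-density output: lower values keep sampling closer to the distribution mode, giving cleaner but less diverse outlines. A rough sketch of temperature-scaled sampling (variable names are illustrative and do not mirror the repo's implementation):

```python
import torch

def sample_mdn(pi_logits, mu, sigma, mix_temperature=0.0001, gauss_temperature=0.01):
    """Sample one value from a 1-D Gaussian mixture with temperature scaling.

    pi_logits, mu, sigma: tensors of shape (num_components,).
    Lower temperatures concentrate sampling on the most likely component and
    shrink the spread around its mean, mirroring the role of the
    --mix_temperature and --gauss_temperature flags.
    """
    pi = torch.softmax(pi_logits / mix_temperature, dim=-1)  # sharpen mixture weights
    k = torch.multinomial(pi, 1).item()                      # pick a component
    std = sigma[k] * (gauss_temperature ** 0.5)              # shrink its std
    return torch.normal(mu[k], std)
```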
To refine the vector glyphs, run
python refinement.mp.py --experiment_name dvf --fontid 14 --candidate_nums 20
where fontid denotes the index of the testing font. The results will be written to ./experiments/dvf_main_model/results/0014/svgs_refined/. Set num_processes according to your GPU's computational capacity.
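Conceptually, refinement optimizes the control points of a synthesized SVG so that its differentiable rasterization matches a target raster image; see refinement.mp.py for the actual multi-process procedure. A simplified sketch of the idea (file names, loss, and hyper-parameters are illustrative):

```python
import torch
import pydiffvg

# Load a synthesized glyph and a target raster image (placeholders for illustration).
canvas_w, canvas_h, shapes, shape_groups = pydiffvg.svg_to_scene('synthesized_glyph.svg')
target = torch.load('target_glyph.pt')  # (canvas_h, canvas_w, 4) tensor in [0, 1]

# Optimize the Bezier control points of every path.
point_vars = [shape.points.requires_grad_(True) for shape in shapes]
optimizer = torch.optim.Adam(point_vars, lr=0.1)
render = pydiffvg.RenderFunction.apply

for step in range(200):
    optimizer.zero_grad()
    scene_args = pydiffvg.RenderFunction.serialize_scene(
        canvas_w, canvas_h, shapes, shape_groups)
    img = render(canvas_w, canvas_h, 2, 2, step, None, *scene_args)
    loss = (img - target).pow(2).mean()
    loss.backward()
    optimizer.step()

# Write the refined outline back to disk (this uses the patched save_svg.py noted above).
pydiffvg.save_svg('refined_glyph.svg', canvas_w, canvas_h, shapes, shape_groups)
```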
We have pretrained the neural rasterizer and image super-resolution model. If you want to train them yourself:
To train the neural rasterizer:
python train_nr.py --mode train --experiment_name dvf --model_name neural_raster
To train the image super-resolution model:
python train_sr.py --mode train --name image_sr
- Prepare ttf/otf files. Put the ttf/otf files in ./data_utils/font_ttfs/train and ./data_utils/font_ttfs/test, and name them in index order as 0000.ttf, 0001.ttf, 0002.ttf, ... (a renaming sketch is given right after this step).
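If your font files have arbitrary names, a small script like this (a sketch, not part of the repo) can rename them into the expected zero-padded ordering; run it once per split:

```python
import os

# Rename ttf/otf files into the 0000.ttf, 0001.ttf, ... ordering expected by the pipeline.
src_dir = './data_utils/font_ttfs/train'  # change to .../test for the test split
fonts = sorted(f for f in os.listdir(src_dir) if f.lower().endswith(('.ttf', '.otf')))
for idx, name in enumerate(fonts):
    ext = os.path.splitext(name)[1].lower()
    os.rename(os.path.join(src_dir, name), os.path.join(src_dir, f'{idx:04d}{ext}'))
```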
- Deactivate the conda environment and install Fontforge for Python 3:
conda deactivate
apt install python3-fontforge
- Get SFD files via Fontforge
cd data_utils
python convert_ttf_to_sfd_mp.py --split train
python convert_ttf_to_sfd_mp.py --split test
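The conversion script relies on Fontforge's Python bindings; at its core, converting one font is essentially the following (a minimal sketch with illustrative paths):

```python
import fontforge  # available after `apt install python3-fontforge`

# Open one TTF and save it in Fontforge's SFD format (paths are illustrative).
font = fontforge.open('./font_ttfs/train/0000.ttf')
font.save('./font_sfds/train/0000.sfd')
font.close()
```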
- Generate glyph images
python write_glyph_imgs.py --split train
python write_glyph_imgs.py --split test
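For reference, rendering a single glyph into a small grayscale image can be done with PIL; a sketch assuming a 64x64 canvas (write_glyph_imgs.py handles resolution and centering itself):

```python
from PIL import Image, ImageDraw, ImageFont

# Render the letter 'A' from one font onto a 64x64 grayscale canvas
# (resolution and text placement here are illustrative).
img = Image.new('L', (64, 64), color=255)
draw = ImageDraw.Draw(img)
font = ImageFont.truetype('./font_ttfs/train/0000.ttf', 48)
draw.text((8, 4), 'A', fill=0, font=font)
img.save('glyph_A.png')
```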
- Package them into pkl files
python write_data_to_pkl.py --split train
python write_data_to_pkl.py --split test
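The packaging step stores, per font, both the rendered glyph images and the vector drawing sequences in one pickle. Schematically it amounts to something like the following (the key names, array shapes, and file path are placeholders, not necessarily what write_data_to_pkl.py uses):

```python
import pickle
import numpy as np

# Placeholder record: one font's raster glyphs plus its padded drawing-command sequences.
font_record = {
    'rendered': np.zeros((52, 64, 64), dtype=np.uint8),    # e.g. 52 glyph images (A-Z, a-z)
    'sequence': np.zeros((52, 51, 10), dtype=np.float32),  # padded command sequences
}
with open('./data/vecfont_dataset/train/font_0000.pkl', 'wb') as f:
    pickle.dump(font_record, f)
```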
Note: (1) We pack the image and sequence data into the pkl files, which is memory-consuming during training. A better approach would be to store only the file paths and read the files from disk during training (TO-DO).
(2) If you use the mean and stddev files calculated from your own data, you need to retrain the neural rasterizer. For English fonts, just use the mean and stddev files we provided.
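Computing your own normalization statistics boils down to a per-dimension mean and standard deviation over all training drawing-command arguments, roughly as follows (a sketch; the array source and output file name are placeholders):

```python
import numpy as np

# seqs: all training command arguments stacked as (total_commands, feature_dim).
# Placeholder array; in practice this comes from the packaged training data.
seqs = np.random.rand(100000, 10).astype(np.float32)
mean = seqs.mean(axis=0)
stdev = seqs.std(axis=0)
np.savez('./data/mean_stdev.npz', mean=mean, stdev=stdev)
```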
If you use this code or find our work helpful, please consider citing:
@article{wang2021deepvecfont,
  author = {Wang, Yizhi and Lian, Zhouhui},
  title = {DeepVecFont: Synthesizing High-quality Vector Fonts via Dual-modality Learning},
  journal = {ACM Transactions on Graphics},
  volume = {40},
  number = {6},
  numpages = {15},
  month = {December},
  year = {2021},
  doi = {10.1145/3478513.3480488}
}