At the beginning of this year, we released MACE Micro to fully support ultra-low-power inference scenarios on mobile phones and IoT devices. In this version, we add quantization support for MACE Micro and integrate CMSIS5 to better support Cortex-M chips.
More and more R&D engineers are using the PyTorch framework to train their models. In previous versions, MACE converted PyTorch models by using the ONNX format as a bridge. To serve PyTorch developers better, this version supports direct conversion of PyTorch models, which improves inference performance. At the same time, we cooperated with the MEGVII company and now support its MegEngine model format. If you train your models with the MegEngine framework, you can now use MACE to deploy them on mobile phones or IoT devices.
Armv8.2 provides half-precision floating-point data processing instructions. In this version, we support fp16 computation via the Armv8.2 fp16 instructions, which increases inference speed by roughly 40% for models such as MobileNet V1. We also add support for bfloat16 (Brain Floating Point), a 16-bit floating-point format, which increases inference speed by roughly 40% for models such as MobileNet V1/V2 on some low-end chips.
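To make the bfloat16 format concrete: bf16 keeps float32's sign bit and all 8 exponent bits but only the top 7 mantissa bits, so conversion is essentially truncating the upper half of the float32 bit pattern. The sketch below illustrates this in plain Python (it is not MACE's implementation, just a demonstration of the number format):

```python
import struct

def to_bf16_bits(x: float) -> int:
    """Truncate an IEEE-754 float32 to its 16-bit bfloat16 pattern
    (sign bit, all 8 exponent bits, top 7 mantissa bits)."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16

def from_bf16_bits(b: int) -> float:
    """Expand a bfloat16 bit pattern back to float32 by zero-padding
    the low 16 mantissa bits."""
    (x,) = struct.unpack("<f", struct.pack("<I", b << 16))
    return x

# bf16 keeps float32's full exponent range, so magnitudes survive,
# but only ~2-3 decimal digits of mantissa precision remain.
approx = from_bf16_bits(to_bf16_bits(3.14159))
```

Because the exponent field is unchanged, bf16 covers the same dynamic range as float32, which is why it usually needs no rescaling of model weights, unlike fp16.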
In this version, we also add the following features:
- Support more operators, such as `GroupNorm`, `ExtractImagePatches`, `Elu`, etc.
- Optimize the performance of the framework and operators, such as the `Reduce` operator.
- Support dynamic filters for conv2d/deconv2d.
- Integrate MediaTek APU support on mt6873, mt6885, and mt6853.
Thanks to the following contributors, whose code makes MACE better.
@ZhangZhijing1, who contributed the bf16 code which was then committed by someone else. @yungchienhsu, @Yi-Kai-Chen, @Eric-YK-Chen, @yzchen, @gasgallo, @lq, @huahang, @elswork, @LovelyBuggies, @freewym.
Compared with mobile devices such as phones, microcontrollers are small, low-power computing devices, often embedded in hardware that needs only basic computation, including household appliances and IoT devices. Billions of microcontrollers are produced every year. MACE adds microcontroller support to fully cover ultra-low-power inference scenarios on IoT devices. MACE's microcontroller engine does not rely on any OS, heap memory allocation, C++ library, or other third-party libraries except the math library.
MACE supports two kinds of quantization mechanisms: quantization-aware training and post-training quantization. In this version, we add support for mixing the two. Furthermore, we support the Armv8.2 dot-product instructions for CPU quantization.
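For readers unfamiliar with post-training quantization, the core idea is to map the observed float range of a tensor onto 8-bit integers with a scale and zero point. The following is a rough illustrative sketch of that affine scheme, not MACE's actual implementation:

```python
import numpy as np

def quantize_uint8(x: np.ndarray):
    """Post-training affine quantization: map the observed float range
    [lo, hi] (widened to include 0 so that 0.0 is exactly representable)
    onto uint8 values in [0, 255]."""
    lo, hi = min(float(x.min()), 0.0), max(float(x.max()), 0.0)
    scale = (hi - lo) / 255.0 if hi > lo else 1.0
    zero_point = int(round(-lo / scale))
    q = np.clip(np.round(x / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Recover approximate float values from quantized integers."""
    return (q.astype(np.float32) - zero_point) * scale

x = np.array([-1.0, 0.0, 0.5, 2.0], dtype=np.float32)
q, s, zp = quantize_uint8(x)
x_hat = dequantize(q, s, zp)  # close to x, within one quantization step
```

Quantization-aware training simulates exactly this rounding during training so the model learns to tolerate it; post-training quantization applies it afterward using calibration statistics, and this release allows combining both in one model.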
MACE continuously optimizes performance. This time, we add ION buffer support for Qualcomm SoCs, which greatly improves the inference performance of models that need to switch between GPU and CPU. Moreover, we optimize the performance of operators such as `ResizeNearestNeighbor` and `Deconv`.
In this version, we support many new operators: `BatchMatMulV2` and `Select` for TensorFlow, and `Deconv2d`, `Strided-Slice`, and `Sigmoid` for the Hexagon DSP. We also fix some bugs in validation and tuning.
Thanks to the following contributor, whose code makes MACE better: gasgallo.
We found that missing op implementations on devices (GPU, Hexagon DSP, etc.) led to inefficient model execution, because memory synchronization between the device and the CPU consumed much time. So we added and enhanced some operators on the GPU (reshape, lpnorm, mvnorm, etc.) and the Hexagon DSP (s2d, d2s, sub, etc.) to improve the efficiency of model execution.
In the last version, we added support for the Kaldi framework. At Xiaomi, we did a lot of work to support speech recognition models, including support for flatten, upsample, and other ONNX operators, as well as some bug fixes.
MACE continuously improves its compilation tools. This time, we add CMake build support. Thanks to ccache acceleration, CMake builds are much faster than the original Bazel builds. Related docs: https://mace.readthedocs.io/en/latest/user_guide/basic_usage_cmake.html
In this version, we support detection of performance regressions with dana, and add the `gpu_queue_window` parameter to the YAML file to solve UI jank caused by GPU task execution. Related docs: https://mace.readthedocs.io/en/latest/faq.html
Thanks to the following contributors, whose code makes MACE better.
yungchienhsu, gasgallo, albu, yunikkk
- Support mixed usage of CPU and GPU.
- Support ONNX format.
- Support ARM Linux development board.
- Support CPU quantization.
- Update DSP library.
- Add `Depthwise Deconvolution` of Caffe.
- Add documents about debugging and benchmarking.
- Bug fixes.
- Remove all APIs in mace_runtime.h
- Add OpenclContext and GPUContextBuilder API.
- Add MaceEngineConfig API.
- Add MaceStatus API.
- MaceTensor supports data formats.
Thanks to the following contributors, whose code makes MACE better.
ByronHsu, conansherry, jackwish, herbakamil, tomaszkaliciak, oneTaken, madhavajay, wayen820, idstein, newway1995.
- New workflow and documents.
- Separate the model library from MACE library.
- Reduce the size of the static and dynamic libraries.
- Support `ArgMax` operations.
- Support `Deconvolution` of Caffe.
- Support NDK r17b.
- Use a file to store OpenCL tuned parameters, and add the `SetOpenCLParameterPath` API.
- Add a new `MaceEngine::Init` API that loads model data from a file.
- Do not unmap the model data file when loading a model from files with the CPU runtime.
- 2D LWS tuning did not work.
- Winograd convolution on GPU failed when tuning was enabled.
- Incorrect dynamic library for the host.
Thanks to the following contributors, whose code makes MACE better.
Zero King(@l2dy), James Bie(@JamesBie), Sun Aries(@SunAriesCN), Allen(@allen0125), conansherry(@conansherry), 黎明灰烬(@jackwish)
- Change build and run tools.
- Handle runtime failures.
- Change interfaces to report error types.
- Improve CPU performance.
- Merge the CPU/GPU engines into one.
- Support the `float` data type when running on GPU.
- Return a status instead of aborting when allocation fails.
- Change MACE header interfaces to include only necessary methods.