MATLAB and Simulink can generate optimised code for complete AI applications, including the pre- and post-processing algorithms around the model, for deployment on CPUs, GPUs, FPGAs, and SoCs. For inference on resource-constrained hardware, AI models can be compressed through hyperparameter tuning, quantization, and network pruning. Deep Learning HDL Toolbox supports prototyping and implementing deep learning networks on FPGAs and SoCs.
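As a rough sketch of the compression step, the snippet below uses the dlquantizer workflow from the Deep Learning Toolbox Model Quantization Library to quantize a trained network to int8 ahead of FPGA deployment. The file name 'trainedNet.mat' and the folder 'calibrationImages' are placeholders for your own assets, not fixed names.

```matlab
% Post-training int8 quantization sketch (requires Deep Learning Toolbox
% and the Model Quantization Library support package).
% 'trainedNet.mat' and 'calibrationImages' are hypothetical placeholders.
load('trainedNet.mat', 'net');                    % your trained network
calData = imageDatastore('calibrationImages');    % representative input images

quantObj = dlquantizer(net, 'ExecutionEnvironment', 'FPGA');
calibrate(quantObj, calData);   % collect dynamic ranges of weights and activations
qNet = quantize(quantObj);      % produce the quantized network for deployment
```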

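A minimal Deep Learning HDL Toolbox flow might then look like the following. The Xilinx vendor, Ethernet interface, and 'zcu102_int8' bitstream are example choices for a ZCU102 board, assumed here for illustration; substitute the values that match your hardware.

```matlab
% FPGA prototyping sketch with Deep Learning HDL Toolbox. Board vendor,
% interface, and bitstream are example values; match them to your setup.
hTarget = dlhdl.Target('Xilinx', 'Interface', 'Ethernet');
hW = dlhdl.Workflow('Network', qNet, ...            % quantized network from above
                    'Bitstream', 'zcu102_int8', ... % prebuilt DL processor bitstream
                    'Target', hTarget);
compile(hW);                                        % map layers onto the DL processor
deploy(hW);                                         % program the board and load weights
inputImg = single(rand(227, 227, 3));               % stand-in for a real input image
[prediction, speed] = predict(hW, inputImg, 'Profile', 'on');  % inference with profiling
```

Running predict with profiling enabled reports per-layer latency on the board, which is useful for checking that the quantized network meets its throughput budget before committing to a full implementation.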