Contents
1. Introduction to Cube.AI and its CubeIDE integration
    1.1 About Cube.AI
    1.2 Integrating and installing Cube.AI with CubeIDE
    1.3 Hardware platforms supported by Cube.AI
    1.4 Benefits of using Cube.AI
2. FP-AI-SENSING1
    2.1 FP-AI-SENSING1 overview
    2.2 Hardware platforms supported by the FP-AI-SENSING1 package
3. Deploying FP-AI-SENSING1
    3.1 The B-L475E-IOT01A development board
    3.2 Downloading and configuring the FP-AI-SENSING1 package
    3.3 Flashing the firmware
    3.4 Deploying the FP-AI-SENSING1 example project
4. Data collection
    4.1 Downloading and installing the STBLESensor app
    4.2 Configuring data logging with STBLESensor
5. Data preparation and model training
    5.1 Retrieving the logged data files from the board
    5.2 Training the neural-network model
6. Converting the trained model to a C model with Cube.AI
    6.1 Creating a Cube.AI-enabled STM32 project
    6.2 Configuring the neural network in Cube.AI
    6.3 Model analysis and desktop validation
    6.4 Generating the C neural-network model source code
7. Using the C neural-network model
    7.1 C neural-network model source files
    7.2 Custom UART implementation
    7.3 Using the C neural-network model API
    7.4 Building and testing the program
    7.5 Additional notes
1. Introduction to Cube.AI and its CubeIDE integration
1.1 About Cube.AI
Cube.AI, or more precisely STM32Cube.AI, is the X-CUBE-AI expansion package of ST's STM32Cube ecosystem, dedicated to helping developers bring artificial intelligence to STM32 devices. Concretely, it converts algorithm models trained with the various AI development frameworks into a unified C-language source model; that C model is then combined with an STM32 hardware product, so the AI model can be deployed directly on front-end or edge devices and computation happens in place. Details and downloads are available on the ST website: X-CUBE-AI - AI expansion pack for STM32CubeMX - STMicroelectronics.
Cube.AI plugs into ST development tools such as CubeIDE, CubeMX and Keil. The overall workflow has three main stages: 1) collect and prepare data, 2) train and validate the model, 3) generate the C model and deploy it on the front-end or edge device, as shown in the figure below:
Cube.AI currently supports trained models from deep-learning frameworks such as Keras and TensorFlow Lite, as well as any framework that can export to the standard ONNX format, such as PyTorch, Microsoft Cognitive Toolkit and MATLAB. The trained model exported by one of these frameworks is imported through the CubeMX graphical configuration interface, where the C model is configured and generated, and then deployed on an STM32 chip.
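To make the hand-off concrete: what CubeMX ultimately imports is just a saved model file. The sketch below is an illustration only, assuming the Keras 2.2.4 / TensorFlow 1.14 environment installed in section 5; the real model is produced by the official RunMe.py script, but this shows the kind of .h5 artifact that gets selected in CubeMX, with roughly the same layer shapes as the HAR model trained later.
# Illustrative sketch: build a small Keras model and save it as the .h5 file CubeMX imports.
# Layer sizes mirror the IGN HAR model summary shown in section 5; training is omitted here.
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

model = Sequential([
    Conv2D(24, (16, 1), activation='relu', input_shape=(24, 3, 1)),
    MaxPooling2D(pool_size=(3, 1)),
    Flatten(),
    Dense(12, activation='relu'),
    Dropout(0.5),
    Dense(4, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# model.fit(...) with real training data would go here
model.save('har_example.h5')   # this .h5 file is the artifact selected later in CubeMX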
1.2 Integrating and installing Cube.AI with CubeIDE
In the CubeIDE Help menu, choose Embedded Software Packages Manager to open the package installation page, select the X-CUBE-AI version you need and install it, as shown below. When installation completes, the checkbox in front of that version is marked green.
1.3 Hardware platforms supported by Cube.AI
Thanks to ST's continuous optimization and iteration of the X-CUBE-AI package, the C model generated from a neural network needs very little computing power while keeping almost lossless accuracy, so it can be deployed on the vast majority of STM32 chips; the currently supported MCU and MPU families are shown below.
1.4 Benefits of using Cube.AI
Deploying neural networks at the edge reduces latency, saves energy, improves cloud utilization and protects privacy by minimizing the data exchanged over the internet. X-CUBE-AI makes edge deployment convenient, flexible and low-cost, so on-device intelligence becomes a practical choice for many more products.
2. FP-AI-SENSING1
2.1 FP-AI-SENSING1 overview
FP-AI-SENSING1 is an STM32Cube.AI example package from ST. It connects an IoT node to a smartphone over BLE (Bluetooth Low Energy) and uses the STBLESensor app to configure the device and log data, so the data used to train the neural-network model is much closer to the real usage scenario, which gives better training results and accuracy.
More information about the FP-AI-SENSING1 package can be found on the ST website:
FP-AI-SENSING1 - STM32Cube function pack for ultra-low power IoT node with artificial intelligence (AI) application based on audio and motion sensing - STMicroelectronics
From the FP-AI-SENSING1 page, download the source package and its documentation.
2.2 Hardware platforms supported by the FP-AI-SENSING1 package
ST lists the hardware platforms on which the FP-AI-SENSING1 example runs, so developers can quickly get familiar with the example and, through it, with the Cube.AI development flow.
3. Deploying FP-AI-SENSING1
3.1 The B-L475E-IOT01A development board
This article uses ST's B-L475E-IOT01A development board. Open CubeMX, choose Start my project from ST Board and search for B-L475E-IOT01A, as shown below; from tabs 1, 2 and 3 you can download the board's block diagram, documentation, examples and user manuals.
3.2 Downloading and configuring the FP-AI-SENSING1 package
After downloading the FP-AI-SENSING1 package, unzip the source archive en.fp-ai-sensing1.zip, go to the "STM32CubeFunctionPack_SENSING1_V4.0.3\Projects\B-L475E-IOT01A\Applications\SENSING1\STM32CubeIDE" directory and open "CleanSENSING1.bat" with a text editor (on Linux, use CleanSENSING1.sh instead).
CleanSENSING1.bat relies on another tool of the STM32Cube ecosystem, STM32CubeProgrammer, which lets developers read, write and verify device memory:
STM32CubeProg - STM32CubeProgrammer software for programming STM32 products - STMicroelectronics
From the STM32CubeProgrammer page, download the tool and its user manual.
Download and install STM32CubeProgrammer; in this article the installation directory is D:\workForSoftware\STM32CubeProgrammer.
Edit CleanSENSING1.bat so that the path of its "STM32CubeProgrammer" dependency points to your installation:
3.3 Flashing the firmware
Connect the B-L475E-IOT01A board to the PC with a Micro USB cable.
Once connected, the driver installs automatically; open Device Manager to confirm the COM port number and configure the serial parameters.
Right-click CleanSENSING1.bat and run it as administrator; this installs the bootloader on the board and updates the firmware.
The script performs the following operations on the B-L475E-IOT01A board:
- full flash erase
- load the BootLoader into the correct flash region
- load the (compiled) program into the correct flash region
- reset the board
3.4 Deploying the FP-AI-SENSING1 example project
Still under the STM32CubeIDE folder, enter the "B-L475E-IOT01A" directory and open the .project file with CubeIDE to load the FP-AI-SENSING1 project.
With the project open (shown below), the sources the user may adjust are under the User folder; see readme.txt for more information about the project.
In main.c, find the Init_BlueNRG_Stack function, which sets the BLE (Bluetooth Low Energy) device name:
static void Init_BlueNRG_Stack(void)
{
  char BoardName[8];
  uint16_t service_handle, dev_name_char_handle, appearance_char_handle;
  int ret;
  for(int i=0; i<7; i++) {
    BoardName[i] = NodeName[i+1];
  }
The function uses the default BLE name, which is defined in SENSING1.h, for example IAI_403.
Now change the BLE name to AI_Test:
static void Init_BlueNRG_Stack(void)
{
  // char BoardName[8];
  char BoardName[8] = {'A','I','_','T','e','s','t'};
  uint16_t service_handle, dev_name_char_handle, appearance_char_handle;
  int ret;
  for(int i=0; i<7; i++) {
    // BoardName[i] = NodeName[i+1];
    NodeName[i+1] = BoardName[i];
  }
Configure the project's build output formats as follows:
Configure the run settings as follows:
Then build and download the program:
Open a serial terminal, connect to the corresponding COM port and press the reset button on the board (the black one). The serial log below shows that the BLE module started successfully:
4. Data collection
4.1 Downloading and installing the STBLESensor app
Make sure your phone supports BLE, then go to ST's BLE sensor application download page:
STBLESensor - BLE sensor application for Android and iOS - STMicroelectronics
and download the app for your phone:
4.2 Configuring data logging with STBLESensor
This article uses a Huawei phone running Android. After installing the app (version 4.14 at the time of writing), launch it and tap search; the AI_Test BLE device name appears.
Select the AI_Test device; on Android, open the drop-down menu at the top left and choose Data Log (sensing1) to reach the data-logging page, select Accelerometer (the 3-axis accelerometer) and set the parameters to 1.0 Hz, 26 Hz and 1.0X.
On the data-logging page, first create labels, for example Jogging, Walking and Stationary.
1) To start a logging session: enable the label first, then tap the START LOGGING button.
2) To stop a logging session: tap the START LOGGING button again to stop first, then disable the label.
Following these steps, this article recorded two sessions, Walking and Jogging, which produce two .csv files.
5. Data preparation and model training
5.1 Retrieving the logged data files from the board
Disconnect the USB cable between the board and the PC. On the back of the board, move the jumper from pins 1-2 to pins 5-6, then move the USB cable from the ST-LINK connector to the USB-OTG connector, as shown below (1 -> 2).
Power the board up again, hold the user button (blue) down while pressing the reset button (black), then release reset first and the user button afterwards to activate USB-OTG.
Once USB-OTG is active, the board appears on the PC as a USB drive containing the CSV files saved during data logging.
Create a Log_data folder under "STM32CubeFunctionPack_SENSING1_V4.0.3\Utilities\AI_Ressources\Training Scripts\HAR" and copy the files into it:
Each CSV record has the following format: a timestamp, the activity label and the three sensor values:
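Before training, it can be worth eyeballing one of these files from Python. This is only a sketch, assuming the pandas environment installed below; the file name is one of the logged files listed later in the training output, and the exact header is whatever the SENSING1 firmware writes.
# Inspect a logged CSV: timestamp, activity label and the three accelerometer values.
import pandas as pd

df = pd.read_csv('Log_data/IoT01-MemsAnn_11_Jan_23_16h_57m_17s.csv')
print(df.head())                      # first rows: time, activity, x/y/z
print(df.iloc[:, 1].value_counts())   # number of samples per activity label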
5.2 Training the neural-network model
"Training Scripts\HAR" is the human activity recognition training project provided by ST, implemented with a Keras front end and a TensorFlow back end. First install TensorFlow, Keras and the other dependencies.
The installation used in this article:
# Python 3.6 already installed
pip3 install tensorflow==1.14 -i https://pypi.tuna.tsinghua.edu.cn/simple
ERROR: tensorboard 1.14.0 has requirement setuptools>=41.0.0, but you'll have setuptools 28.8.0 which is incompatible.
python3 -m pip install --upgrade pip -i https://pypi.tuna.tsinghua.edu.cn/simple
pip3 install keras==2.2.4 -i https://pypi.tuna.tsinghua.edu.cn/simple
According to the HAR project's readme.txt, the modules listed in requirements.txt should be installed with pip install -r requirements.txt; this article instead installed them one by one with the usual "pip3 install <module>==<version> -i <mirror>" command:
numpy==1.16.4
argparse
os
logging
warnings
datetime
pandas==0.25.1
scipy==1.3.1
matplotlib==3.1.1
mpl_toolkits
scikit-learn==0.21.3
keras==2.2.4
tensorflow==1.14.0
tqdm==4.36.1
keras-tqdm==2.0.1
After installation, go to the datasets directory and open the ReplaceWithWISDMDataset.txt file; download the WISDM lab dataset from the URL it provides.
The downloaded files are shown below; copy them into the datasets directory, overwriting its contents.
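To sanity-check the download, the raw WISDM file can be parsed with a few lines. This is a sketch assuming the standard WISDM_ar_v1.1_raw.txt layout of user, activity, timestamp, x, y, z records terminated by ';'.
# Count samples per activity in the WISDM raw file.
import pandas as pd

rows = []
with open('datasets/WISDM_ar_v1.1_raw.txt') as f:
    for line in f:
        line = line.strip().rstrip(';')
        parts = line.split(',')
        if len(parts) == 6:            # skip malformed records
            rows.append(parts)

df = pd.DataFrame(rows, columns=['user', 'activity', 'timestamp', 'x', 'y', 'z'])
print(df['activity'].value_counts())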
Open RunMe.py to see how the various run parameters are defined:
Run python3 .\RunMe.py -h to see what each parameter means. In particular, --dataset selects the WISDM dataset downloaded above for training, while --dataDir points at a dataset you collected yourself:
PS D:\tools\arm_tool\STM32CubeIDE\STM32CubeFunctionPack_SENSING1_V4.0.3\Utilities\AI_Ressources\Training Scripts\HAR> python3 .\RunMe.py -h
Using TensorFlow backend.
usage: RunMe.py [-h] [--model MODEL] [--dataset DATASET] [--dataDir DATADIR]
[--seqLength SEQLENGTH] [--stepSize STEPSIZE] [-m MERGE]
[--preprocessing PREPROCESSING] [--trainSplit TRAINSPLIT]
[--validSplit VALIDSPLIT] [--epochs N] [--lr LR]
[--decay DECAY] [--batchSize N] [--verbose N]
[--nrSamplesPostValid NRSAMPLESPOSTVALID]
Human Activity Recognition (HAR) in Keras with Tensorflow as backend on WISDM
and WISDM + self logged datasets
optional arguments:
-h, --help show this help message and exit
--model MODEL choose one of the two availavle choices, IGN or GMP, (
default = IGN )
--dataset DATASET choose a dataset to use out of two choices, WISDM or
AST, ( default = WISDM )
--dataDir DATADIR path to new data collected using STM32 IoT board
recorded at 26Hz as sampling rate, (default = )
--seqLength SEQLENGTH
input sequence lenght (default:24)
--stepSize STEPSIZE step size while creating segments (default:24, equal
to seqLen)
-m MERGE, --merge MERGE
if to merge activities (default: True)
--preprocessing PREPROCESSING
gravity rotation filter application (default = True)
--trainSplit TRAINSPLIT
train and test split (default = 0.6 (60 precent for
train and 40 precent for test))
--validSplit VALIDSPLIT
train and validation data split (default = 0.7 (70
percent for train and 30 precent for validation))
--epochs N number of total epochs to run (default: 20)
--lr LR initial learning rate
--decay DECAY decay in learning rate, (default = 1e-6)
--batchSize N mini-batch size (default: 64)
--verbose N verbosity of training and test functions in keras, 0,
1, or 2. Verbosity mode. 0 = silent, 1 = progress bar,
2 = one line per epoch (default: 1)
--nrSamplesPostValid NRSAMPLESPOSTVALID
Number of samples to save from every class for post
training and CubeAI conversion validation. (default =
2)
PS D:\tools\arm_tool\STM32CubeIDE\STM32CubeFunctionPack_SENSING1_V4.0.3\Utilities\AI_Ressources\Training Scripts\HAR>
Add the following lines at the end of RunMe.py:
# save the test set in the format Cube.AI expects, for later validation
testx_f=resultDirName+"testx.npy"
testy_f=resultDirName+"testy.npy"
np.save(testx_f,TestX)
np.save(testy_f,TestY)
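Once training has been run (next step), the saved arrays can be checked quickly. A sketch, run from the corresponding results/<timestamp> subdirectory; the shapes quoted are the ones from this article's run.
# Check the arrays saved for the Cube.AI validation step.
import numpy as np

test_x = np.load('testx.npy')
test_y = np.load('testy.npy')
print(test_x.shape, test_y.shape)   # (12806, 24, 3, 1) and (12806, 4) in this run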
Open a terminal and run python3 .\RunMe.py --dataDir=Log_data (the parameters can be adjusted as needed; this article trains with the defaults first). The output log is shown below. This is clearly a classification problem with the classes Jogging, Stationary, Stairs and Walking, and the network contains a convolution layer, a pooling layer, a flatten layer, two fully connected layers and a dropout layer.
PS D:\tools\arm_tool\STM32CubeIDE\STM32CubeFunctionPack_SENSING1_V4.0.3\Utilities\AI_Ressources\Training Scripts\HAR> python3 .\RunMe.py --dataDir=Log_data
Using TensorFlow backend.
Running HAR on WISDM dataset, with following variables
merge = True
modelName = IGN,
segmentLength = 24
stepSize = 24
preprocessing = True
trainTestSplit = 0.6
trainValidationSplit = 0.7
nEpochs = 20
learningRate = 0.0005
decay =1e-06
batchSize = 64
verbosity = 1
dataDir = Log_data
nrSamplesPostValid = 2
Segmenting Train data
Segments built : 100%|███████████████████████████████████████████████████| 27456/27456 [00:28<00:00, 953.24 segments/s]
Segmenting Test data
Segments built : 100%|██████████████████████████████████████████████████| 18304/18304 [00:14<00:00, 1282.96 segments/s]
Segmentation finished!
preparing data file from all the files in directory Log_data
parsing data from IoT01-MemsAnn_11_Jan_23_16h_57m_17s.csv
parsing data from IoT01-MemsAnn_11_Jan_23_16h_57m_53s.csv
Segmenting the AI logged Train data
Segments built : 100%|████████████████████████████████████████████████████████| 25/25 [00:00<00:00, 3133.35 segments/s]
Segmenting the AI logged Test data
Segments built : 100%|████████████████████████████████████████████████████████| 17/17 [00:00<00:00, 2852.35 segments/s]
Segmentation finished!
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_1 (Conv2D) (None, 9, 3, 24) 408
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 3, 3, 24) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 216) 0
_________________________________________________________________
dense_1 (Dense) (None, 12) 2604
_________________________________________________________________
dropout_1 (Dropout) (None, 12) 0
_________________________________________________________________
dense_2 (Dense) (None, 4) 52
=================================================================
Total params: 3,064
Trainable params: 3,064
Non-trainable params: 0
_________________________________________________________________
Train on 19263 samples, validate on 8216 samples
Epoch 1/20
2023-01-24 14:41:03.484083: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
19263/19263 [==============================] - 3s 167us/step - loss: 1.1442 - acc: 0.5430 - val_loss: 0.6674 - val_acc: 0.7372
Epoch 2/20
19263/19263 [==============================] - 1s 40us/step - loss: 0.7173 - acc: 0.7089 - val_loss: 0.5126 - val_acc: 0.7928
Epoch 3/20
19263/19263 [==============================] - 1s 40us/step - loss: 0.5954 - acc: 0.7522 - val_loss: 0.4470 - val_acc: 0.8051
Epoch 4/20
19263/19263 [==============================] - 1s 39us/step - loss: 0.5288 - acc: 0.7810 - val_loss: 0.4174 - val_acc: 0.8335
Epoch 5/20
19263/19263 [==============================] - 1s 36us/step - loss: 0.4925 - acc: 0.7994 - val_loss: 0.3897 - val_acc: 0.8477
Epoch 6/20
19263/19263 [==============================] - 1s 35us/step - loss: 0.4647 - acc: 0.8173 - val_loss: 0.3607 - val_acc: 0.8647
Epoch 7/20
19263/19263 [==============================] - 1s 37us/step - loss: 0.4404 - acc: 0.8301 - val_loss: 0.3493 - val_acc: 0.8777
Epoch 8/20
19263/19263 [==============================] - 1s 38us/step - loss: 0.4200 - acc: 0.8419 - val_loss: 0.3271 - val_acc: 0.8827
Epoch 9/20
19263/19263 [==============================] - 1s 38us/step - loss: 0.3992 - acc: 0.8537 - val_loss: 0.3163 - val_acc: 0.8890
Epoch 10/20
19263/19263 [==============================] - 1s 40us/step - loss: 0.3878 - acc: 0.8576 - val_loss: 0.3039 - val_acc: 0.8991
Epoch 11/20
19263/19263 [==============================] - 1s 40us/step - loss: 0.3799 - acc: 0.8667 - val_loss: 0.2983 - val_acc: 0.8985
Epoch 12/20
19263/19263 [==============================] - 1s 40us/step - loss: 0.3662 - acc: 0.8736 - val_loss: 0.2922 - val_acc: 0.9007
Epoch 13/20
19263/19263 [==============================] - 1s 36us/step - loss: 0.3613 - acc: 0.8760 - val_loss: 0.2837 - val_acc: 0.9051
Epoch 14/20
19263/19263 [==============================] - 1s 40us/step - loss: 0.3574 - acc: 0.8775 - val_loss: 0.2910 - val_acc: 0.8985
Epoch 15/20
19263/19263 [==============================] - 1s 39us/step - loss: 0.3513 - acc: 0.8796 - val_loss: 0.2814 - val_acc: 0.9080
Epoch 16/20
19263/19263 [==============================] - 1s 38us/step - loss: 0.3482 - acc: 0.8816 - val_loss: 0.2737 - val_acc: 0.9116
Epoch 17/20
19263/19263 [==============================] - 1s 35us/step - loss: 0.3362 - acc: 0.8875 - val_loss: 0.2742 - val_acc: 0.9114
Epoch 18/20
19263/19263 [==============================] - 1s 38us/step - loss: 0.3325 - acc: 0.8892 - val_loss: 0.2661 - val_acc: 0.9137
Epoch 19/20
19263/19263 [==============================] - 1s 40us/step - loss: 0.3257 - acc: 0.8927 - val_loss: 0.2621 - val_acc: 0.9161
Epoch 20/20
19263/19263 [==============================] - 1s 37us/step - loss: 0.3249 - acc: 0.8918 - val_loss: 0.2613 - val_acc: 0.9188
12806/12806 [==============================] - 0s 25us/step
Accuracy for each class is given below.
Jogging : 97.28 %
Stationary : 98.77 %
Stairs : 66.33 %
Walking : 87.49 %
PS D:\tools\arm_tool\STM32CubeIDE\STM32CubeFunctionPack_SENSING1_V4.0.3\Utilities\AI_Ressources\Training Scripts\HAR>
The trained model and related outputs are written to the results directory, with a separate timestamped subdirectory per training run. Since the model is trained with Keras, the output model is an .h5 file, for example har_IGN.h5:
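Before importing the .h5 file into CubeMX it is worth reloading it once on the PC to confirm its structure and input/output shapes. A sketch assuming the same Keras 2.2.4 environment and this article's results directory:
# Reload the trained model and confirm what CubeMX should report.
from keras.models import load_model

model = load_model('results/2023_Jan_24_14_40_13/har_IGN.h5')
model.summary()                                 # should match the layer table printed during training
print(model.input_shape, model.output_shape)    # (None, 24, 3, 1) -> (None, 4)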
6. Converting the trained model to a C model with Cube.AI
6.1 Creating a Cube.AI-enabled STM32 project
Create a new STM32 project in CubeIDE, choosing to start from a board in CubeMX.
Name the project B-L475E-IOT01A_cube.ai, as shown below.
Once it is created, double-click the .ioc file to open the CubeMX configuration view.
6.2 Configuring the neural network in Cube.AI
Enable the X-CUBE-AI pack. Back on the main page, a Software Packs category appears with a STMicroelectronics.X-CUBE-AI entry; open it, tick items 2 and 3 marked in the figure below, and in item 5 choose which UART the application and debugging will use.
Tip: on the X-CUBE-AI configuration page, hovering over an option pops up a description box, and the CTRL+D shortcut opens the related X-CUBE-AI documentation,
which is quite detailed:
Alternatively, the documentation can be opened directly from the Cube.AI installation directory, for example D:\workForSoftware\STM32CubeMX\Repository\Packs\STMicroelectronics\X-CUBE-AI\7.3.0\Documentation.
Also note that enabling X-CUBE-AI automatically enables the CRC peripheral, which it depends on.
6.3 Model analysis and desktop validation
Add a network (Add network) as shown below. In item 3 the network name can be changed; in item 4 choose the model framework and the model file, for example "STM32CubeFunctionPack_SENSING1_V4.0.3\Utilities\AI_Ressources\Training Scripts\HAR\results\2023_Jan_24_14_40_13\har_IGN.h5"; in items 5 and 6 the model can be validated either with random data or with the generated validation data (the testx.npy and testy.npy files produced by the lines added to RunMe.py during training):
Clicking the settings button opens a page with more options and more detailed information, mostly about model optimization; this article keeps the defaults for now.
Click the Analyze button to print information about the model and the computing resources (RAM, flash, etc.) required to deploy it:
Analyzing?model
D:/workForSoftware/STM32CubeMX/Repository/Packs/STMicroelectronics/X-CUBE-AI/7.3.0/Utilities/windows/stm32ai?analyze?--name?har_ign?-m?D:/tools/arm_tool/STM32CubeIDE/STM32CubeFunctionPack_SENSING1_V4.0.3/Utilities/AI_Ressources/Training?Scripts/HAR/results/2023_Jan_11_17_50_03/har_IGN.h5?--type?keras?--compression?none?--verbosity?1?--workspace?C:\Users\py_hp\AppData\Local\Temp\mxAI_workspace465785871649500151581099545474794?--output?C:\Users\py_hp\.stm32cubemx\network_output?--allocate-inputs?--allocate-outputs?
Neural?Network?Tools?for?STM32AI?v1.6.0?(STM.ai?v7.3.0-RC5)
?
?Exec/report?summary?(analyze)
?---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
?model?file?????????:???D:\tools\arm_tool\STM32CubeIDE\STM32CubeFunctionPack_SENSING1_V4.0.3\Utilities\AI_Ressources\Training?Scripts\HAR\results\2023_Jan_11_17_50_03\har_IGN.h5
?type???????????????:???keras????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????
?c_name?????????????:???har_ign??????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????
?compression????????:???none?????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????
?options????????????:???allocate-inputs,?allocate-outputs????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????
?optimization???????:???balanced?????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????
?target/series??????:???generic??????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????
?workspace?dir??????:???C:\Users\py_hp\AppData\Local\Temp\mxAI_workspace465785871649500151581099545474794????????????????????????????????????????????????????????????????????????
?output?dir?????????:???C:\Users\py_hp\.stm32cubemx\network_output???????????????????????????????????????????????????????????????????????????????????????????????????????????????
?model_fmt??????????:???float????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????
?model_name?????????:???har_IGN??????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????
?model_hash?????????:???ff0080dbe395a3d8fd3f63243d2326d5?????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????
?params?#???????????:???3,064?items?(11.97?KiB)??????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????
?---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
?input?1/1??????????:???'input_0'?(domain:activations/**default**)???????????????????????????????????????????????????????????????????????????????????????????????????????????????
????????????????????:???72?items,?288?B,?ai_float,?float,?(1,24,3,1)?????????????????????????????????????????????????????????????????????????????????????????????????????????????
?output?1/1?????????:???'dense_2'?(domain:activations/**default**)???????????????????????????????????????????????????????????????????????????????????????????????????????????????
????????????????????:???4?items,?16?B,?ai_float,?float,?(1,1,1,4)????????????????????????????????????????????????????????????????????????????????????????????????????????????????
?macc???????????????:???14,404???????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????
?weights?(ro)???????:???12,256?B?(11.97?KiB)?(1?segment)?????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????
?activations?(rw)???:???2,016?B?(1.97?KiB)?(1?segment)?*?????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????
?ram?(total)????????:???2,016?B?(1.97?KiB)?=?2,016?+?0?+?0???????????????????????????????????????????????????????????????????????????????????????????????????????????????????????
?---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
?(*)?'input'/'output'?buffers?can?be?used?from?the?activations?buffer
?
?Model?name?-?har_IGN?['input_0']?['dense_2']
?------------------------------------------------------------------------------------------------------
?id???layer?(original)?????????????????oshape??????????????????param/size?????macc?????connected?to???
?------------------------------------------------------------------------------------------------------
?0????input_0?(None)???????????????????[b:None,h:24,w:3,c:1]??????????????????????????????????????????
??????conv2d_1_conv2d?(Conv2D)?????????[b:None,h:9,w:3,c:24]???408/1,632??????10,392???input_0????????
??????conv2d_1?(Conv2D)????????????????[b:None,h:9,w:3,c:24]??????????????????648??????conv2d_1_conv2d
?------------------------------------------------------------------------------------------------------
?1????max_pooling2d_1?(MaxPooling2D)???[b:None,h:3,w:3,c:24]??????????????????648??????conv2d_1???????
?------------------------------------------------------------------------------------------------------
?2????flatten_1?(Flatten)??????????????[b:None,c:216]??????????????????????????????????max_pooling2d_1
?------------------------------------------------------------------------------------------------------
?3????dense_1_dense?(Dense)????????????[b:None,c:12]???????????2,604/10,416???2,604????flatten_1??????
?------------------------------------------------------------------------------------------------------
?5????dense_2_dense?(Dense)????????????[b:None,c:4]????????????52/208?????????52???????dense_1_dense??
??????dense_2?(Dense)??????????????????[b:None,c:4]???????????????????????????60???????dense_2_dense??
?------------------------------------------------------------------------------------------------------
?model/c-model:?macc=14,404/14,404??weights=12,256/12,256??activations=--/2,016?io=--/0
?
?Number?of?operations?per?c-layer
?-----------------------------------------------------------------------------------
?c_id????m_id???name?(type)??????????????????????????#op?(type)????????????????????
?-----------------------------------------------------------------------------------
?0???????1??????conv2d_1_conv2d?(optimized_conv2d)????????????11,688?(smul_f32_f32)
?1???????3??????dense_1_dense?(dense)??????????????????????????2,604?(smul_f32_f32)
?2???????5??????dense_2_dense?(dense)?????????????????????????????52?(smul_f32_f32)
?3???????5??????dense_2?(nl)??????????????????????????????????????60?(op_f32_f32)??
?-----------------------------------------------------------------------------------
?total????????????????????????????????????????????????????????14,404???????????????
?
???Number?of?operation?types
???---------------------------------------------
???smul_f32_f32??????????????14,344???????99.6%
???op_f32_f32????????????????????60????????0.4%
?
?Complexity?report?(model)
?------------------------------------------------------------------------------------
?m_id???name??????????????c_macc????????????????????c_rom?????????????????????c_id??
?------------------------------------------------------------------------------------
?1??????max_pooling2d_1???||||||||||||||||??81.1%???|||???????????????13.3%???[0]???
?3??????dense_1_dense?????||||??????????????18.1%???||||||||||||||||??85.0%???[1]???
?5??????dense_2_dense?????|??????????????????0.8%???|??????????????????1.7%???[2,?3]
?------------------------------------------------------------------------------------
?macc=14,404?weights=12,256?act=2,016?ram_io=0
Creating?txt?report?file?C:\Users\py_hp\.stm32cubemx\network_output\har_ign_analyze_report.txt
elapsed?time?(analyze):?7.692s
Getting?Flash?and?Ram?size?used?by?the?library
Model?file:??????har_IGN.h5
Total?Flash:?????29880?B?(29.18?KiB)
????Weights:?????12256?B?(11.97?KiB)
????Library:?????17624?B?(17.21?KiB)
Total?Ram:???????4000?B?(3.91?KiB)
????Activations:?2016?B?(1.97?KiB)
????Library:?????1984?B?(1.94?KiB)
????Input:???????288?B?(included?in?Activations)
????Output:??????16?B?(included?in?Activations)
Done
Analyze complete on AI model
Click the Validation on desktop button to validate the model on the PC. This mainly checks how the original model and the generated C model differ in resource use and accuracy, using the testx.npy and testy.npy files we just specified as validation data.
Starting?AI?validation?on?desktop?with?custom?dataset?:?D:\tools\arm_tool\STM32CubeIDE\STM32CubeFunctionPack_SENSING1_V4.0.3\Utilities\AI_Ressources\Training?Scripts\HAR\results\2023_Jan_24_14_40_13\testx.npy...
D:/workForSoftware/STM32CubeMX/Repository/Packs/STMicroelectronics/X-CUBE-AI/7.3.0/Utilities/windows/stm32ai?validate?--name?har_ign?-m?D:/tools/arm_tool/STM32CubeIDE/STM32CubeFunctionPack_SENSING1_V4.0.3/Utilities/AI_Ressources/Training?Scripts/HAR/results/2023_Jan_11_17_50_03/har_IGN.h5?--type?keras?--compression?none?--verbosity?1?--workspace?C:\Users\py_hp\AppData\Local\Temp\mxAI_workspace46601041973700012072836595678733048?--output?C:\Users\py_hp\.stm32cubemx\network_output?--allocate-inputs?--allocate-outputs?--valoutput?D:/tools/arm_tool/STM32CubeIDE/STM32CubeFunctionPack_SENSING1_V4.0.3/Utilities/AI_Ressources/Training?Scripts/HAR/results/2023_Jan_24_14_40_13/testy.npy?--valinput?D:/tools/arm_tool/STM32CubeIDE/STM32CubeFunctionPack_SENSING1_V4.0.3/Utilities/AI_Ressources/Training?Scripts/HAR/results/2023_Jan_24_14_40_13/testx.npy?
Neural?Network?Tools?for?STM32AI?v1.6.0?(STM.ai?v7.3.0-RC5)
Copying?the?AI?runtime?files?to?the?user?workspace:?C:\Users\py_hp\AppData\Local\Temp\mxAI_workspace46601041973700012072836595678733048\inspector_har_ign\workspace
?
?Exec/report?summary?(validate)
?---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
?model?file?????????:???D:\tools\arm_tool\STM32CubeIDE\STM32CubeFunctionPack_SENSING1_V4.0.3\Utilities\AI_Ressources\Training?Scripts\HAR\results\2023_Jan_11_17_50_03\har_IGN.h5
?type???????????????:???keras????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????
?c_name?????????????:???har_ign??????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????
?compression????????:???none?????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????
?options????????????:???allocate-inputs,?allocate-outputs????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????
?optimization???????:???balanced?????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????
?target/series??????:???generic??????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????
?workspace?dir??????:???C:\Users\py_hp\AppData\Local\Temp\mxAI_workspace46601041973700012072836595678733048??????????????????????????????????????????????????????????????????????
?output?dir?????????:???C:\Users\py_hp\.stm32cubemx\network_output???????????????????????????????????????????????????????????????????????????????????????????????????????????????
?vinput?files???????:???D:\tools\arm_tool\STM32CubeIDE\STM32CubeFunctionPack_SENSING1_V4.0.3\Utilities\AI_Ressources\Training?Scripts\HAR\results\2023_Jan_24_14_40_13\testx.npy?
?voutput?files??????:???D:\tools\arm_tool\STM32CubeIDE\STM32CubeFunctionPack_SENSING1_V4.0.3\Utilities\AI_Ressources\Training?Scripts\HAR\results\2023_Jan_24_14_40_13\testy.npy?
?model_fmt??????????:???float????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????
?model_name?????????:???har_IGN??????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????
?model_hash?????????:???ff0080dbe395a3d8fd3f63243d2326d5?????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????
?params?#???????????:???3,064?items?(11.97?KiB)??????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????
?---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
?input?1/1??????????:???'input_0'?(domain:activations/**default**)???????????????????????????????????????????????????????????????????????????????????????????????????????????????
????????????????????:???72?items,?288?B,?ai_float,?float,?(1,24,3,1)?????????????????????????????????????????????????????????????????????????????????????????????????????????????
?output?1/1?????????:???'dense_2'?(domain:activations/**default**)???????????????????????????????????????????????????????????????????????????????????????????????????????????????
????????????????????:???4?items,?16?B,?ai_float,?float,?(1,1,1,4)????????????????????????????????????????????????????????????????????????????????????????????????????????????????
?macc???????????????:???14,404???????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????
?weights?(ro)???????:???12,256?B?(11.97?KiB)?(1?segment)?????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????
?activations?(rw)???:???2,016?B?(1.97?KiB)?(1?segment)?*?????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????
?ram?(total)????????:???2,016?B?(1.97?KiB)?=?2,016?+?0?+?0???????????????????????????????????????????????????????????????????????????????????????????????????????????????????????
?---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
?(*)?'input'/'output'?buffers?can?be?used?from?the?activations?buffer
Setting?validation?data...
?loading?file:?D:\tools\arm_tool\STM32CubeIDE\STM32CubeFunctionPack_SENSING1_V4.0.3\Utilities\AI_Ressources\Training?Scripts\HAR\results\2023_Jan_24_14_40_13\testx.npy
?-?samples?are?reshaped:?(12806,?24,?3,?1)?->?(12806,?24,?3,?1)
?loading?file:?D:\tools\arm_tool\STM32CubeIDE\STM32CubeFunctionPack_SENSING1_V4.0.3\Utilities\AI_Ressources\Training?Scripts\HAR\results\2023_Jan_24_14_40_13\testy.npy
?-?samples?are?reshaped:?(12806,?4)?->?(12806,?1,?1,?4)
?I[1]:?(12806,?24,?3,?1)/float32,?min/max=[-26.319,?32.844],?mean/std=[0.075,?5.034],?input_0
?O[1]:?(12806,?1,?1,?4)/float32,?min/max=[0.000,?1.000],?mean/std=[0.250,?0.433],?dense_2
Running?the?STM?AI?c-model?(AI?RUNNER)...(name=har_ign,?mode=x86)
?X86?shared?lib?(C:\Users\py_hp\AppData\Local\Temp\mxAI_workspace46601041973700012072836595678733048\inspector_har_ign\workspace\lib\libai_har_ign.dll)?['har_ign']
?Summary?"har_ign"?-?['har_ign']
?--------------------------------------------------------------------------------
?inputs/outputs???????:?1/1
?input_1??????????????:?(1,24,3,1),?float32,?288?bytes,?in?activations?buffer
?output_1?????????????:?(1,1,1,4),?float32,?16?bytes,?in?activations?buffer
?n_nodes??????????????:?4
?compile_datetime?????:?Jan?25?2023?22:55:51?(Wed?Jan?25?22:55:47?2023)
?activations??????????:?2016
?weights??????????????:?12256
?macc?????????????????:?14404
?--------------------------------------------------------------------------------
?runtime??????????????:?STM.AI?7.3.0?(Tools?7.3.0)
?capabilities?????????:?['IO_ONLY',?'PER_LAYER',?'PER_LAYER_WITH_DATA']
?device???????????????:?AMD64?Intel64?Family?6?Model?158?Stepping?9,?GenuineIntel?(Windows)
?--------------------------------------------------------------------------------
STM.IO:???0%|??????????|?0/12806?[00:00<?,??it/s]
STM.IO:??11%|█?????????|?1424/12806?[00:00<00:00,?14136.31it/s]
STM.IO:??14%|█▍????????|?1849/12806?[00:00<00:04,?2293.62it/s]?
STM.IO:??17%|█▋????????|?2170/12806?[00:00<00:05,?1774.74it/s]
STM.IO:??19%|█▉????????|?2429/12806?[00:01<00:06,?1520.26it/s]
STM.IO:??21%|██????????|?2645/12806?[00:01<00:07,?1348.30it/s]
STM.IO:??22%|██▏???????|?2828/12806?[00:01<00:07,?1291.67it/s]
STM.IO:??23%|██▎???????|?2992/12806?[00:01<00:07,?1245.52it/s]
STM.IO:??25%|██▍???????|?3141/12806?[00:01<00:08,?1194.67it/s]
STM.IO:??26%|██▌???????|?3278/12806?[00:01<00:08,?1107.82it/s]
STM.IO:??27%|██▋???????|?3407/12806?[00:02<00:08,?1154.55it/s]
STM.IO:??28%|██▊???????|?3548/12806?[00:02<00:07,?1218.70it/s]
STM.IO:??29%|██▊???????|?3678/12806?[00:02<00:07,?1175.60it/s]
STM.IO:??30%|██▉???????|?3811/12806?[00:02<00:07,?1215.67it/s]
STM.IO:??31%|███???????|?3938/12806?[00:02<00:07,?1139.58it/s]
STM.IO:??32%|███▏??????|?4075/12806?[00:02<00:07,?1197.83it/s]
STM.IO:??33%|███▎??????|?4199/12806?[00:02<00:07,?1207.70it/s]
STM.IO:??34%|███▍??????|?4323/12806?[00:02<00:07,?1078.59it/s]
STM.IO:??35%|███▍??????|?4451/12806?[00:02<00:07,?1129.92it/s]
STM.IO:??36%|███▌??????|?4590/12806?[00:03<00:06,?1194.76it/s]
STM.IO:??37%|███▋??????|?4718/12806?[00:03<00:06,?1216.59it/s]
STM.IO:??38%|███▊??????|?4843/12806?[00:03<00:06,?1195.77it/s]
STM.IO:??39%|███▉??????|?4965/12806?[00:03<00:06,?1159.48it/s]
STM.IO:??40%|███▉??????|?5083/12806?[00:03<00:06,?1116.81it/s]
STM.IO:??41%|████??????|?5197/12806?[00:03<00:06,?1095.57it/s]
STM.IO:??41%|████▏?????|?5308/12806?[00:03<00:06,?1078.25it/s]
STM.IO:??42%|████▏?????|?5433/12806?[00:03<00:06,?1122.47it/s]
STM.IO:??43%|████▎?????|?5547/12806?[00:03<00:06,?1056.59it/s]
STM.IO:??44%|████▍?????|?5655/12806?[00:04<00:06,?1055.01it/s]
STM.IO:??45%|████▍?????|?5762/12806?[00:04<00:06,?1035.74it/s]
STM.IO:??46%|████▌?????|?5867/12806?[00:04<00:06,?1022.60it/s]
STM.IO:??47%|████▋?????|?5981/12806?[00:04<00:06,?1053.06it/s]
STM.IO:??48%|████▊?????|?6098/12806?[00:04<00:06,?1083.31it/s]
STM.IO:??48%|████▊?????|?6208/12806?[00:04<00:06,?1025.35it/s]
STM.IO:??49%|████▉?????|?6312/12806?[00:04<00:06,?952.27it/s]?
STM.IO:??50%|█████?????|?6410/12806?[00:04<00:07,?910.42it/s]
STM.IO:??51%|█████?????|?6509/12806?[00:04<00:06,?930.92it/s]
STM.IO:??52%|█████▏????|?6620/12806?[00:04<00:06,?976.37it/s]
STM.IO:??52%|█████▏????|?6720/12806?[00:05<00:06,?926.81it/s]
STM.IO:??53%|█████▎????|?6818/12806?[00:05<00:06,?940.17it/s]
STM.IO:??54%|█████▍????|?6914/12806?[00:05<00:06,?930.36it/s]
STM.IO:??55%|█████▍????|?7008/12806?[00:05<00:06,?852.84it/s]
STM.IO:??55%|█████▌????|?7106/12806?[00:05<00:06,?885.63it/s]
STM.IO:??56%|█████▌????|?7197/12806?[00:05<00:06,?805.83it/s]
STM.IO:??57%|█████▋????|?7299/12806?[00:05<00:06,?858.49it/s]
STM.IO:??58%|█████▊????|?7388/12806?[00:05<00:07,?744.49it/s]
STM.IO:??58%|█████▊????|?7473/12806?[00:06<00:07,?755.34it/s]
STM.IO:??59%|█████▉????|?7560/12806?[00:06<00:06,?785.88it/s]
STM.IO:??60%|█████▉????|?7642/12806?[00:06<00:06,?782.78it/s]
STM.IO:??60%|██████????|?7723/12806?[00:06<00:06,?768.90it/s]
STM.IO:??61%|██████????|?7825/12806?[00:06<00:06,?828.66it/s]
STM.IO:??62%|██████▏???|?7937/12806?[00:06<00:05,?897.30it/s]
STM.IO:??63%|██████▎???|?8033/12806?[00:06<00:05,?913.23it/s]
STM.IO:??63%|██████▎???|?8127/12806?[00:06<00:05,?913.79it/s]
STM.IO:??64%|██████▍???|?8254/12806?[00:06<00:04,?994.44it/s]
STM.IO:??65%|██████▌???|?8358/12806?[00:06<00:04,?1005.50it/s]
STM.IO:??66%|██████▌???|?8466/12806?[00:07<00:04,?1024.62it/s]
STM.IO:??67%|██████▋???|?8579/12806?[00:07<00:04,?1052.03it/s]
STM.IO:??68%|██████▊???|?8712/12806?[00:07<00:03,?1111.93it/s]
STM.IO:??69%|██████▉???|?8826/12806?[00:07<00:03,?1044.19it/s]
STM.IO:??70%|██████▉???|?8933/12806?[00:07<00:03,?1005.29it/s]
STM.IO:??71%|███████???|?9036/12806?[00:07<00:03,?1010.21it/s]
STM.IO:??71%|███████▏??|?9150/12806?[00:07<00:03,?1043.83it/s]
STM.IO:??72%|███████▏??|?9277/12806?[00:07<00:03,?1100.57it/s]
STM.IO:??73%|███████▎??|?9404/12806?[00:07<00:02,?1144.16it/s]
STM.IO:??74%|███████▍??|?9521/12806?[00:08<00:02,?1135.98it/s]
STM.IO:??75%|███████▌??|?9648/12806?[00:08<00:02,?1170.75it/s]
STM.IO:??76%|███████▋??|?9780/12806?[00:08<00:02,?1209.41it/s]
STM.IO:??77%|███████▋??|?9903/12806?[00:08<00:02,?1184.92it/s]
STM.IO:??78%|███████▊??|?10032/12806?[00:08<00:02,?1212.12it/s]
STM.IO:??79%|███████▉??|?10155/12806?[00:08<00:02,?1214.79it/s]
STM.IO:??80%|████████??|?10278/12806?[00:08<00:02,?1096.01it/s]
STM.IO:??81%|████████??|?10391/12806?[00:08<00:02,?1100.40it/s]
STM.IO:??82%|████████▏?|?10506/12806?[00:08<00:02,?1112.34it/s]
STM.IO:??83%|████████▎?|?10619/12806?[00:09<00:02,?1035.66it/s]
STM.IO:??84%|████████▎?|?10725/12806?[00:09<00:02,?914.43it/s]?
STM.IO:??84%|████████▍?|?10821/12806?[00:09<00:02,?889.74it/s]
STM.IO:??85%|████████▌?|?10920/12806?[00:09<00:02,?915.76it/s]
STM.IO:??86%|████████▌?|?11014/12806?[00:09<00:02,?819.91it/s]
STM.IO:??87%|████████▋?|?11100/12806?[00:09<00:02,?738.28it/s]
STM.IO:??87%|████████▋?|?11178/12806?[00:09<00:02,?740.24it/s]
STM.IO:??88%|████████▊?|?11255/12806?[00:09<00:02,?657.58it/s]
STM.IO:??89%|████████▊?|?11364/12806?[00:10<00:02,?702.16it/s]
STM.IO:??89%|████████▉?|?11455/12806?[00:10<00:01,?752.49it/s]
STM.IO:??90%|█████████?|?11548/12806?[00:10<00:01,?794.66it/s]
STM.IO:??91%|█████████?|?11631/12806?[00:10<00:01,?796.56it/s]
STM.IO:??92%|█████████▏|?11748/12806?[00:10<00:01,?879.46it/s]
STM.IO:??93%|█████████▎|?11853/12806?[00:10<00:01,?922.73it/s]
STM.IO:??93%|█████████▎|?11949/12806?[00:10<00:00,?895.23it/s]
STM.IO:??94%|█████████▍|?12049/12806?[00:10<00:00,?922.41it/s]
STM.IO:??95%|█████████▍|?12163/12806?[00:10<00:00,?976.60it/s]
STM.IO:??96%|█████████▌|?12280/12806?[00:10<00:00,?1025.50it/s]
STM.IO:??97%|█████████▋|?12412/12806?[00:11<00:00,?1096.80it/s]
STM.IO:??98%|█████████▊|?12525/12806?[00:11<00:00,?1072.91it/s]
STM.IO:??99%|█████████▉|?12663/12806?[00:11<00:00,?1147.57it/s]
STM.IO:?100%|█████████▉|?12781/12806?[00:11<00:00,?1118.51it/s]
?Results?for?12806?inference(s)?-?average?per?inference
??device??????????????:?AMD64?Intel64?Family?6?Model?158?Stepping?9,?GenuineIntel?(Windows)
??duration????????????:?0.057ms
??c_nodes?????????????:?4
?c_id??m_id??desc????????????????output???????????????????ms??????????%
?-------------------------------------------------------------------------------
?0?????1?????Conv2dPool?(0x109)??(1,3,3,24)/float32/864B???????0.049???86.5%
?1?????3?????Dense?(0x104)???????(1,1,1,12)/float32/48B????????0.005????9.1%
?2?????5?????Dense?(0x104)???????(1,1,1,4)/float32/16B?????????0.001????1.8%
?3?????5?????NL?(0x107)??????????(1,1,1,4)/float32/16B?????????0.001????2.5%
?-------------------------------------------------------------------------------
???????????????????????????????????????????????????????????????0.057?ms
?NOTE:?duration?and?exec?time?per?layer?is?just?an?indication.?They?are?dependent?of?the?HOST-machine?work-load.
Running?the?Keras?model...
Saving?validation?data...
?output?directory:?C:\Users\py_hp\.stm32cubemx\network_output
?creating?C:\Users\py_hp\.stm32cubemx\network_output\har_ign_val_io.npz
?m_outputs_1:?(12806,?1,?1,?4)/float32,?min/max=[0.000,?1.000],?mean/std=[0.250,?0.376],?dense_2
?c_outputs_1:?(12806,?1,?1,?4)/float32,?min/max=[0.000,?1.000],?mean/std=[0.250,?0.376],?dense_2
Computing?the?metrics...
?Accuracy?report?#1?for?the?generated?x86?C-model
?----------------------------------------------------------------------------------------------------
?notes:?-?computed?against?the?provided?ground?truth?values
????????-?12806?samples?(4?items?per?sample)
??acc=86.72%,?rmse=0.224433631,?mae=0.096160948,?l2r=0.496649474,?nse=73.14%
??4?classes?(12806?samples)
??----------------------------
??C0??????3678???.???62???41
??C1????????.??1124??14????.
??C2???????254??10??1806??662
??C3???????66????.???592?4497
?Accuracy?report?#1?for?the?reference?model
?----------------------------------------------------------------------------------------------------
?notes:?-?computed?against?the?provided?ground?truth?values
????????-?12806?samples?(4?items?per?sample)
??acc=86.72%,?rmse=0.224433631,?mae=0.096160948,?l2r=0.496649474,?nse=73.14%
??4?classes?(12806?samples)
??----------------------------
??C0??????3678???.???62???41
??C1????????.??1124??14????.
??C2???????254??10??1806??662
??C3???????66????.???592?4497
?Cross?accuracy?report?#1?(reference?vs?C-model)
?----------------------------------------------------------------------------------------------------
?notes:?-?the?output?of?the?reference?model?is?used?as?ground?truth/reference?value
????????-?12806?samples?(4?items?per?sample)
??acc=100.00%,?rmse=0.000000063,?mae=0.000000024,?l2r=0.000000139,?nse=100.00%
??4?classes?(12806?samples)
??----------------------------
??C0??????3998???.????.????.
??C1????????.??1134???.????.
??C2????????.????.??2474???.
??C3????????.????.????.??5200
?
?Evaluation?report?(summary)
?----------------------------------------------------------------------------------------------------------------------------------------------------------
?Output??????????????acc???????rmse??????????mae???????????l2r???????????mean???????????std???????????nse???????????tensor????????????????????????????????
?----------------------------------------------------------------------------------------------------------------------------------------------------------
?x86?c-model?#1??????86.72%????0.224433631???0.096160948???0.496649474???-0.000000000???0.224435821???0.731362987???dense_2,?ai_float,?(1,1,1,4),?m_id=[5]
?original?model?#1???86.72%????0.224433631???0.096160948???0.496649474???-0.000000001???0.224435821???0.731362987???dense_2,?ai_float,?(1,1,1,4),?m_id=[5]
?X-cross?#1??????????100.00%???0.000000063???0.000000024???0.000000139???0.000000000????0.000000063???1.000000000???dense_2,?ai_float,?(1,1,1,4),?m_id=[5]
?----------------------------------------------------------------------------------------------------------------------------------------------------------
?
??rmse?:?Root?Mean?Squared?Error
??mae??:?Mean?Absolute?Error
??l2r??:?L2?relative?error
??nse??:?Nash-Sutcliffe?efficiency?criteria
Creating?txt?report?file?C:\Users\py_hp\.stm32cubemx\network_output\har_ign_validate_report.txt
elapsed?time?(validate):?26.458s
Validation
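The reference-model accuracy in the report above can also be cross-checked outside CubeMX by running the Keras model directly on the saved arrays. A sketch assuming this article's results directory; the printed value should land close to the reported acc=86.72%.
# Cross-check the desktop validation with plain Keras + NumPy.
import numpy as np
from keras.models import load_model

res_dir = 'results/2023_Jan_24_14_40_13/'
model = load_model(res_dir + 'har_IGN.h5')
x = np.load(res_dir + 'testx.npy')
y = np.load(res_dir + 'testy.npy')

pred = model.predict(x)
acc = np.mean(np.argmax(pred, axis=1) == np.argmax(y, axis=1))
print('accuracy: %.2f%%' % (100.0 * acc))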
6.4 Generating the C neural-network model source code
Switch the board back to the ST-LINK connection (remove the jumper from pins 5-6 and put it back on pins 1-2).
To keep the later source-code walkthrough simple, only the C neural-network model sources are generated here, without the example application code (the drawback is that after the new program is flashed to the board, the validation on target feature can no longer be used), as shown below.
Configure the project generation:
7. Using the C neural-network model
7.1 C neural-network model source files
Because the network was named har_ign when configuring it in CubeMX, the following files are generated: har_ign.h/c, har_ign_data.h/c, har_ign_data_params.h/c and har_ign_config.h. These source files are the converted C neural-network model; they expose a set of APIs which, built on top of the Cube.AI runtime, implement the neural-network computation:
The har_ign_generate_report.txt file is a log of how the C neural-network model was generated.
7.2 Custom UART implementation
Since no companion application code was generated, the UART support has to be implemented by hand, so I ported my usual UART code: under the project root I created an ICore source folder with print and usart subfolders, holding print.h/c and usart.h/c respectively.
print.h:
#ifndef INC_RETARGET_H_
#define INC_RETARGET_H_
#include "stm32l4xx_hal.h"
#include "stdio.h"http://用于printf函數(shù)串口重映射
#include <sys/stat.h>
void ResetPrintInit(UART_HandleTypeDef *huart);
int _isatty(int fd);
int _write(int fd, char* ptr, int len);
int _close(int fd);
int _lseek(int fd, int ptr, int dir);
int _read(int fd, char* ptr, int len);
int _fstat(int fd, struct stat* st);
#endif /* INC_RETARGET_H_ */
print.c:
#include <_ansi.h>
#include <_syslist.h>
#include <errno.h>
#include <sys/time.h>
#include <sys/times.h>
#include <limits.h>
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include "print.h"
#if !defined(OS_USE_SEMIHOSTING)
#define STDIN_FILENO 0
#define STDOUT_FILENO 1
#define STDERR_FILENO 2
UART_HandleTypeDef *gHuart;
void ResetPrintInit(UART_HandleTypeDef *huart) {
gHuart = huart;
/* Disable I/O buffering for STDOUT stream, so that
* chars are sent out as soon as they are printed. */
setvbuf(stdout, NULL, _IONBF, 0);
}
int _isatty(int fd) {
if (fd >= STDIN_FILENO && fd <= STDERR_FILENO)
return 1;
errno = EBADF;
return 0;
}
int _write(int fd, char* ptr, int len) {
HAL_StatusTypeDef hstatus;
if (fd == STDOUT_FILENO || fd == STDERR_FILENO) {
hstatus = HAL_UART_Transmit(gHuart, (uint8_t *) ptr, len, HAL_MAX_DELAY);
if (hstatus == HAL_OK)
return len;
else
return EIO;
}
errno = EBADF;
return -1;
}
int _close(int fd) {
if (fd >= STDIN_FILENO && fd <= STDERR_FILENO)
return 0;
errno = EBADF;
return -1;
}
int _lseek(int fd, int ptr, int dir) {
(void) fd;
(void) ptr;
(void) dir;
errno = EBADF;
return -1;
}
int _read(int fd, char* ptr, int len) {
HAL_StatusTypeDef hstatus;
if (fd == STDIN_FILENO) {
hstatus = HAL_UART_Receive(gHuart, (uint8_t *) ptr, 1, HAL_MAX_DELAY);
if (hstatus == HAL_OK)
return 1;
else
return EIO;
}
errno = EBADF;
return -1;
}
int _fstat(int fd, struct stat* st) {
if (fd >= STDIN_FILENO && fd <= STDERR_FILENO) {
st->st_mode = S_IFCHR;
return 0;
}
errno = EBADF;
return 0;
}
#endif //#if !defined(OS_USE_SEMIHOSTING)
usart.h:
#ifndef INC_USART_H_
#define INC_USART_H_
#include "stm32l4xx_hal.h"      // HAL library header
#include <string.h>             // string handling
#include "../print/print.h"     // used for retargeting printf to the UART
extern UART_HandleTypeDef huart1;            // UART HAL handle, defined by CubeMX
#define USART_REC_LEN 256                    // maximum number of received bytes
extern uint8_t USART_RX_BUF[USART_REC_LEN];  // receive buffer, up to USART_REC_LEN bytes
extern uint16_t USART_RX_STA;                // receive status flags
extern uint8_t USART_NewData;                // one-byte buffer used by the receive interrupt
void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart); // UART receive-complete callback
#endif /* INC_USART_H_ */
usart.c:
#include "usart.h"
uint8_t USART_RX_BUF[USART_REC_LEN]; // receive buffer, up to USART_REC_LEN bytes
/*
 * bit15: set (USART_RX_STA |= 0x8000) when a carriage return (0x0d) is received
 * bit14: overflow flag, set (USART_RX_STA |= 0x4000) when the data exceed the buffer length
 * bit13: reserved
 * bit12: reserved
 * bit11~0: number of valid bytes received (0~4095)
 */
uint16_t USART_RX_STA = 0;   // receive status flags, see bit layout above
uint8_t USART_NewData;       // one-byte buffer used by the receive interrupt
void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart) // UART receive-complete callback
{
  if(huart == &huart1) // check the interrupt source (USART1: USB virtual COM port)
  {
    if(USART_NewData == 0x0d){ // carriage return received
      USART_RX_STA |= 0x8000;  // mark the end of the command
    }else{
      if((USART_RX_STA & 0X0FFF) < USART_REC_LEN){
        USART_RX_BUF[USART_RX_STA & 0X0FFF] = USART_NewData; // store the received byte
        USART_RX_STA++;                                      // increment the byte counter
      }else{
        USART_RX_STA |= 0x4000; // data exceed the buffer length, mark overflow
      }
    }
    HAL_UART_Receive_IT(&huart1, (uint8_t *)&USART_NewData, 1); // re-arm the receive interrupt
  }
}
7.3 Using the C neural-network model API
Leaving the underlying mechanics aside for now, the code below shows how these APIs are called. In main.c, the aiInit function initializes the har_ign model and prints its information. In the main loop, a message received over the UART provides a data-generation factor; acquire_and_process_data generates the input data from it, aiRun is called with the generated input and an output buffer to run the neural network, and post_process then prints the result.
/* USER CODE END Header */
/* Includes ------------------------------------------------------------------*/
#include "main.h"
#include "crc.h"
#include "dfsdm.h"
#include "i2c.h"
#include "quadspi.h"
#include "spi.h"
#include "usart.h"
#include "usb_otg.h"
#include "gpio.h"
/* Private includes ----------------------------------------------------------*/
/* USER CODE BEGIN Includes */
#include "../../ICore/print/print.h"
#include "../../ICore/usart/usart.h"
#include "../../X-CUBE-AI/app/har_ign.h"
#include "../../X-CUBE-AI/app/har_ign_data.h"
/* USER CODE END Includes */
/* Private typedef -----------------------------------------------------------*/
/* USER CODE BEGIN PTD */
/* USER CODE END PTD */
/* Private define ------------------------------------------------------------*/
/* USER CODE BEGIN PD */
/* USER CODE END PD */
/* Private macro -------------------------------------------------------------*/
/* USER CODE BEGIN PM */
/* USER CODE END PM */
/* Private variables ---------------------------------------------------------*/
/* USER CODE BEGIN PV */
/* USER CODE END PV */
/* Private function prototypes -----------------------------------------------*/
void SystemClock_Config(void);
/* USER CODE BEGIN PFP */
/* USER CODE END PFP */
/* Private user code ---------------------------------------------------------*/
/* USER CODE BEGIN 0 */
/* Global handle to reference the instantiated C-model */
static ai_handle network = AI_HANDLE_NULL;
/* Global c-array to handle the activations buffer */
AI_ALIGNED(32)
static ai_u8 activations[AI_HAR_IGN_DATA_ACTIVATIONS_SIZE];
/* Array to store the data of the input tensor */
AI_ALIGNED(32)
static ai_float in_data[AI_HAR_IGN_IN_1_SIZE];
/* or static ai_u8 in_data[AI_HAR_IGN_IN_1_SIZE_BYTES]; */
/* c-array to store the data of the output tensor */
AI_ALIGNED(32)
static ai_float out_data[AI_HAR_IGN_OUT_1_SIZE];
/* static ai_u8 out_data[AI_HAR_IGN_OUT_1_SIZE_BYTES]; */
/* Array of pointer to manage the model's input/output tensors */
static ai_buffer *ai_input;
static ai_buffer *ai_output;
static ai_buffer_format fmt_input;
static ai_buffer_format fmt_output;
void buf_print(void)
{
printf("in_data:");
for (int i=0; i<AI_HAR_IGN_IN_1_SIZE; i++)
{
printf("%f ",((ai_float*)in_data)[i]);
}
printf("\n");
printf("out_data:");
for (int i=0; i<AI_HAR_IGN_OUT_1_SIZE; i++)
{
printf("%f ",((ai_float*)out_data)[i]);
}
printf("\n");
}
void aiPrintBufInfo(const ai_buffer *buffer)
{
printf("(%lu, %lu, %lu, %lu)", AI_BUFFER_SHAPE_ELEM(buffer, AI_SHAPE_BATCH),
AI_BUFFER_SHAPE_ELEM(buffer, AI_SHAPE_HEIGHT),
AI_BUFFER_SHAPE_ELEM(buffer, AI_SHAPE_WIDTH),
AI_BUFFER_SHAPE_ELEM(buffer, AI_SHAPE_CHANNEL));
printf(" buffer_size:%d ", (int)AI_BUFFER_SIZE(buffer));
}
void aiPrintDataType(const ai_buffer_format fmt)
{
if (AI_BUFFER_FMT_GET_TYPE(fmt) == AI_BUFFER_FMT_TYPE_FLOAT)
printf("float%d ", (int)AI_BUFFER_FMT_GET_BITS(fmt));
else if (AI_BUFFER_FMT_GET_TYPE(fmt) == AI_BUFFER_FMT_TYPE_BOOL) {
printf("bool%d ", (int)AI_BUFFER_FMT_GET_BITS(fmt));
} else { /* integer type */
printf("%s%d ", AI_BUFFER_FMT_GET_SIGN(fmt)?"i":"u",
(int)AI_BUFFER_FMT_GET_BITS(fmt));
}
}
void aiPrintDataInfo(const ai_buffer *buffer,const ai_buffer_format fmt)
{
if (buffer->data)
printf(" @0x%X/%d \n",
(int)buffer->data,
(int)AI_BUFFER_BYTE_SIZE(AI_BUFFER_SIZE(buffer), fmt)
);
else
printf(" (User Domain)/%d \n",
(int)AI_BUFFER_BYTE_SIZE(AI_BUFFER_SIZE(buffer), fmt)
);
}
void aiPrintNetworkInfo(const ai_network_report report)
{
printf("Model name : %s\n", report.model_name);
printf(" model signature : %s\n", report.model_signature);
printf(" model datetime : %s\r\n", report.model_datetime);
printf(" compile datetime : %s\r\n", report.compile_datetime);
printf(" runtime version : %d.%d.%d\r\n",
report.runtime_version.major,
report.runtime_version.minor,
report.runtime_version.micro);
if (report.tool_revision[0])
printf(" Tool revision : %s\r\n", (report.tool_revision[0])?report.tool_revision:"");
printf(" tools version : %d.%d.%d\r\n",
report.tool_version.major,
report.tool_version.minor,
report.tool_version.micro);
printf(" complexity : %lu MACC\r\n", (unsigned long)report.n_macc);
printf(" c-nodes : %d\r\n", (int)report.n_nodes);
printf(" map_activations : %d\r\n", report.map_activations.size);
for (int idx=0; idx<report.map_activations.size;idx++) {
const ai_buffer *buffer = &report.map_activations.buffer[idx];
printf(" [%d] ", idx);
aiPrintBufInfo(buffer);
printf("\r\n");
}
printf(" map_weights : %d\r\n", report.map_weights.size);
for (int idx=0; idx<report.map_weights.size;idx++) {
const ai_buffer *buffer = &report.map_weights.buffer[idx];
printf(" [%d] ", idx);
aiPrintBufInfo(buffer);
printf("\r\n");
}
}
/*
* Bootstrap
*/
int aiInit(void) {
ai_error err;
/* Create and initialize the c-model */
const ai_handle acts[] = { activations };
err = ai_har_ign_create_and_init(&network, acts, NULL);
if (err.type != AI_ERROR_NONE) {
printf("ai_error_type:%d,ai_error_code:%d\r\n",err.type,err.code);
};
ai_network_report report;
if (ai_har_ign_get_report(network, &report) != true) {
printf("ai get report error\n");
return -1;
}
aiPrintNetworkInfo(report);
/* Retrieve pointers to the model's input/output tensors */
ai_input = ai_har_ign_inputs_get(network, NULL);
ai_output = ai_har_ign_outputs_get(network, NULL);
//
fmt_input = AI_BUFFER_FORMAT(ai_input);
fmt_output = AI_BUFFER_FORMAT(ai_output);
printf(" n_inputs/n_outputs : %u/%u\r\n", report.n_inputs,
report.n_outputs);
printf("input :");
aiPrintBufInfo(ai_input);
aiPrintDataType(fmt_input);
aiPrintDataInfo(ai_input, fmt_input);
//
printf("output :");
aiPrintBufInfo(ai_output);
aiPrintDataType(fmt_output);
aiPrintDataInfo(ai_output, fmt_output);
return 0;
}
int acquire_and_process_data(void *in_data,int factor)
{
printf("in_data:");
for (int i=0; i<AI_HAR_IGN_IN_1_SIZE; i++)
{
switch(i%3){
case 0:
((ai_float*)in_data)[i] = -175+(ai_float)(i*factor*1.2)/10.0;
break;
case 1:
((ai_float*)in_data)[i] = 50+(ai_float)(i*factor*0.6)/100.0;
break;
case 2:
((ai_float*)in_data)[i] = 975-(ai_float)(i*factor*1.8)/100.0;
break;
default:
break;
}
printf("%f ",((ai_float*)in_data)[i]);
}
printf("\n");
return 0;
}
/*
* Run inference
*/
int aiRun(const void *in_data, void *out_data) {
ai_i32 n_batch;
ai_error err;
/* 1 - Update IO handlers with the data payload */
ai_input[0].data = AI_HANDLE_PTR(in_data);
ai_output[0].data = AI_HANDLE_PTR(out_data);
/* 2 - Perform the inference */
n_batch = ai_har_ign_run(network, &ai_input[0], &ai_output[0]);
if (n_batch != 1) {
err = ai_har_ign_get_error(network);
printf("ai_error_type:%d,ai_error_code:%d\r\n",err.type,err.code);
};
return 0;
}
int post_process(void *out_data)
{
printf("out_data:");
for (int i=0; i<AI_HAR_IGN_OUT_1_SIZE; i++)
{
printf("%f ",((ai_float*)out_data)[i]);
}
printf("\n");
return 0;
}
/* USER CODE END 0 */
/**
* @brief The application entry point.
* @retval int
*/
int main(void)
{
/* USER CODE BEGIN 1 */
/* USER CODE END 1 */
/* MCU Configuration--------------------------------------------------------*/
/* Reset of all peripherals, Initializes the Flash interface and the Systick. */
HAL_Init();
/* USER CODE BEGIN Init */
/* USER CODE END Init */
/* Configure the system clock */
SystemClock_Config();
/* USER CODE BEGIN SysInit */
/* USER CODE END SysInit */
/* Initialize all configured peripherals */
MX_GPIO_Init();
MX_DFSDM1_Init();
MX_I2C2_Init();
MX_QUADSPI_Init();
MX_SPI3_Init();
MX_USART1_UART_Init();
MX_USART3_UART_Init();
MX_USB_OTG_FS_PCD_Init();
MX_CRC_Init();
/* USER CODE BEGIN 2 */
ResetPrintInit(&huart1);
HAL_UART_Receive_IT(&huart1,(uint8_t *)&USART_NewData, 1); // start UART reception in interrupt mode
USART_RX_STA = 0;
aiInit();
uint8_t factor = 1;
buf_print();
/* USER CODE END 2 */
/* Infinite loop */
/* USER CODE BEGIN WHILE */
while (1)
{
if(USART_RX_STA&0xC000){ // overflow or carriage return received: process the line and start over
printf("uart1:%.*s\r\n",USART_RX_STA&0X0FFF, USART_RX_BUF);
if(strstr((const char*)USART_RX_BUF,(const char*)"test"))
{
factor = ((uint8_t)USART_RX_BUF[4]-0x30);
printf("factor:%d\n",factor);
acquire_and_process_data(in_data,factor);
aiRun(in_data, out_data);
post_process(out_data);
}
USART_RX_STA=0;  // reset the receive state and start over
HAL_Delay(100);  // small delay
}
/* USER CODE END WHILE */
/* USER CODE BEGIN 3 */
}
/* USER CODE END 3 */
}
// remaining generated code
.......
7.4 Building and testing the program
Configure the project's output file formats and set up the run configuration:
Build and download the program:
Open a serial terminal and watch the log output, then send a message such as test7: the digit 7 is used as the factor for generating the input data, and the inference result is printed.
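The same test can also be scripted from the PC instead of typing into a serial terminal. A sketch using pyserial; the COM port name and the 115200 baud rate are assumptions to adapt to your own configuration.
# Send "test7" to the board and print whatever comes back.
import time
import serial   # pip3 install pyserial

with serial.Serial('COM5', 115200, timeout=1) as port:
    port.write(b'test7\r')   # the firmware treats 0x0d as the end of a command
    time.sleep(1)
    print(port.read(4096).decode(errors='replace'))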
7.5 Additional notes
At this point the STM32 embedded AI workflow with CubeIDE + Cube.AI + Keras has been walked through end to end, but the log coming back over the serial port is not yet meaningful: during data collection we only logged the three sensor values, while the trained model by default expects an input of 24 values per inference, so the two do not match. The official HAR training project therefore needs to be analyzed again so that model training matches the collected data; see part two of this series.
However, the official HAR training project is still rather complex and not ideal for learning how Cube.AI is really used. The following articles will therefore drop the official HAR training project and instead write a training project from scratch, generate the neural-network model from the actually collected data so that input and output match, and evaluate it on data sampled from the sensors in real time; see part three of this series.