Table of Contents
1.1 Environment Setup
  1.1.1 Downloading the Code
  1.1.2 Environment Setup for the Older Version
1.2 Building the Training Set
1.3 Training the Model
1.4 Testing the Model
  1.4.1 Code Modifications
    1.4.1.1 Create ava.json under /SlowFast/demo/AVA
    1.4.1.2 Modify /SlowFast/demo/AVA/SLOWFAST_32x2_R101_50_50.yaml
    1.4.1.3 Download the Pretrained Weights
    1.4.1.4 Running the Code
    1.4.1.5 Results
1.1 Environment Setup
1.1.1 Downloading the Code
Official repository:
git clone https://github.com/facebookresearch/slowfast
The command above pulls the latest version of the code, but it leads to a conflict between the pytorchvideo package and PyTorch 1.6: pytorchvideo requires pytorch>=1.8, while my GPU's CUDA version is 10.0, which limits me to PyTorch 1.6, so the install fails. The workaround is to use an older version of the code, which does not depend on pytorchvideo:
git clone https://gitee.com/qiang_sun/SlowFast.git
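Before continuing, it can help to confirm which PyTorch build is actually installed and which CUDA version it was compiled against, since the whole reason for using the older code is the torch 1.6 / CUDA 10.0 constraint described above. A minimal sketch, assuming PyTorch and torchvision are already installed:

# Sanity check of the local PyTorch / CUDA setup.
import torch
import torchvision

print("torch:", torch.__version__)                # this guide assumes 1.6.0
print("torchvision:", torchvision.__version__)    # this guide assumes 0.7.0
print("CUDA (compiled against):", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())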
1.1.2 Environment Setup for the Older Version
opencv_python==4.5.1.48
detectron2==0.4
torch==1.6.0
fvcore==0.1.5
torchvision==0.7.0
psutil==5.8.0
matplotlib==3.2.0
tqdm==4.60.0
simplejson==3.17.2
av==8.0.3
numpy==1.19.4
scikit_learn==0.24.2
tensorboard==1.15.0
To install detectron2, run the following command:
pip install 'git+https://github.com/facebookresearch/detectron2.git' --user
If you run into problems, see https://zhuanlan.zhihu.com/p/106853715
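To double-check that the active environment actually matches the pins listed above, the installed versions can be compared programmatically. A minimal sketch using pkg_resources (shipped with setuptools); only a few of the pins are checked here:

# Compare a few installed package versions against the pins above.
import pkg_resources

pins = {
    "opencv-python": "4.5.1.48",
    "torch": "1.6.0",
    "torchvision": "0.7.0",
    "fvcore": "0.1.5",
    "av": "8.0.3",
    "numpy": "1.19.4",
}

for name, wanted in pins.items():
    try:
        installed = pkg_resources.get_distribution(name).version
        status = "OK" if installed == wanted else f"installed {installed}"
    except pkg_resources.DistributionNotFound:
        status = "missing"
    print(f"{name}=={wanted}: {status}")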
1.2 Building the Training Set
1.3 Training the Model
1.4 Testing the Model
1.4.1 Code Modifications
1.4.1.1 Create a new file named ava.json under /SlowFast/demo/AVA with the following content:
{"bend/bow (at the waist)": 0, "crawl": 1, "crouch/kneel": 2, "dance": 3, "fall down": 4, "get up": 5, "jump/leap": 6, "lie/sleep": 7, "martial art": 8, "run/jog": 9, "sit": 10, "stand": 11, "swim": 12, "walk": 13, "answer phone": 14, "brush teeth": 15, "carry/hold (an object)": 16, "catch (an object)": 17, "chop": 18, "climb (e.g., a mountain)": 19, "clink glass": 20, "close (e.g., a door, a box)": 21, "cook": 22, "cut": 23, "dig": 24, "dress/put on clothing": 25, "drink": 26, "drive (e.g., a car, a truck)": 27, "eat": 28, "enter": 29, "exit": 30, "extract": 31, "fishing": 32, "hit (an object)": 33, "kick (an object)": 34, "lift/pick up": 35, "listen (e.g., to music)": 36, "open (e.g., a window, a car door)": 37, "paint": 38, "play board game": 39, "play musical instrument": 40, "play with pets": 41, "point to (an object)": 42, "press": 43, "pull (an object)": 44, "push (an object)": 45, "put down": 46, "read": 47, "ride (e.g., a bike, a car, a horse)": 48, "row boat": 49, "sail boat": 50, "shoot": 51, "shovel": 52, "smoke": 53, "stir": 54, "take a photo": 55, "text on/look at a cellphone": 56, "throw": 57, "touch (an object)": 58, "turn (e.g., a screwdriver)": 59, "watch (e.g., TV)": 60, "work on a computer": 61, "write": 62, "fight/hit (a person)": 63, "give/serve (an object) to (a person)": 64, "grab (a person)": 65, "hand clap": 66, "hand shake": 67, "hand wave": 68, "hug (a person)": 69, "kick (a person)": 70, "kiss (a person)": 71, "lift (a person)": 72, "listen to (a person)": 73, "play with kids": 74, "push (another person)": 75, "sing to (e.g., self, a person, a group)": 76, "take (an object) from (a person)": 77, "talk to (e.g., self, a person, a group)": 78, "watch (a person)": 79}
1.4.1.2 Replace the contents of /SlowFast/demo/AVA/SLOWFAST_32x2_R101_50_50.yaml with the following:
TRAIN:
  ENABLE: False
  DATASET: ava
  BATCH_SIZE: 16
  EVAL_PERIOD: 1
  CHECKPOINT_PERIOD: 1
  AUTO_RESUME: True
  CHECKPOINT_FILE_PATH: "./SlowFast/configs/AVA/c2/SLOWFAST_32x2_R101_50_50.pkl"  # path to pretrain model
  CHECKPOINT_TYPE: pytorch
DATA:
  NUM_FRAMES: 32
  SAMPLING_RATE: 2
  TRAIN_JITTER_SCALES: [256, 320]
  TRAIN_CROP_SIZE: 224
  TEST_CROP_SIZE: 256
  INPUT_CHANNEL_NUM: [3, 3]
DETECTION:
  ENABLE: True
  ALIGNED: False
AVA:
  BGR: False
  DETECTION_SCORE_THRESH: 0.8
  TEST_PREDICT_BOX_LISTS: ["person_box_67091280_iou90/ava_detection_val_boxes_and_labels.csv"]
SLOWFAST:
  ALPHA: 4
  BETA_INV: 8
  FUSION_CONV_CHANNEL_RATIO: 2
  FUSION_KERNEL_SZ: 5
RESNET:
  ZERO_INIT_FINAL_BN: True
  WIDTH_PER_GROUP: 64
  NUM_GROUPS: 1
  DEPTH: 101
  TRANS_FUNC: bottleneck_transform
  STRIDE_1X1: False
  NUM_BLOCK_TEMP_KERNEL: [[3, 3], [4, 4], [6, 6], [3, 3]]
  SPATIAL_DILATIONS: [[1, 1], [1, 1], [1, 1], [2, 2]]
  SPATIAL_STRIDES: [[1, 1], [2, 2], [2, 2], [1, 1]]
NONLOCAL:
  LOCATION: [[[], []], [[], []], [[6, 13, 20], []], [[], []]]
  GROUP: [[1, 1], [1, 1], [1, 1], [1, 1]]
  INSTANTIATION: dot_product
  POOL: [[[2, 2, 2], [2, 2, 2]], [[2, 2, 2], [2, 2, 2]], [[2, 2, 2], [2, 2, 2]], [[2, 2, 2], [2, 2, 2]]]
BN:
  USE_PRECISE_STATS: False
  NUM_BATCHES_PRECISE: 200
SOLVER:
  MOMENTUM: 0.9
  WEIGHT_DECAY: 1e-7
  OPTIMIZING_METHOD: sgd
MODEL:
  NUM_CLASSES: 80
  ARCH: slowfast
  MODEL_NAME: SlowFast
  LOSS_FUNC: bce
  DROPOUT_RATE: 0.5
  HEAD_ACT: sigmoid
TEST:
  ENABLE: False
  DATASET: ava
  BATCH_SIZE: 8
DATA_LOADER:
  NUM_WORKERS: 2
  PIN_MEMORY: True
NUM_GPUS: 1
NUM_SHARDS: 1
RNG_SEED: 0
OUTPUT_DIR: .
#TENSORBOARD:
#  MODEL_VIS:
#    TOPK: 2
DEMO:
  ENABLE: True
  LABEL_FILE_PATH: "./demo/AVA/ava.json"
  INPUT_VIDEO: "./Vinput/2.mp4"
  OUTPUT_FILE: "./Voutput/1.mp4"
  DETECTRON2_CFG: "COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"
  DETECTRON2_WEIGHTS: https://dl.fbaipublicfiles.com/detectron2/COCO-Detection/faster_rcnn_R_50_FPN_3x/137849458/model_final_280758.pkl
You only need to change the four fields below. After editing, copy the entire block above into /SlowFast/demo/AVA/SLOWFAST_32x2_R101_50_50.yaml and replace its original contents: the file that ships with the repo differs from the parameters listed above in a few places, so replacing the whole file is the simplest option.
CHECKPOINT_FILE_PATH: "./SlowFast/configs/AVA/c2/SLOWFAST_32x2_R101_50_50.pkl"
LABEL_FILE_PATH: "./demo/AVA/ava.json"
INPUT_VIDEO: "./Vinput/2.mp4"
OUTPUT_FILE: "./Voutput/1.mp4"
CHECKPOINT_FILE_PATH: the downloaded pretrained model (the download link is given in the next step)
LABEL_FILE_PATH: the ava.json file created in the previous step, listing the action classes the model can predict
INPUT_VIDEO: the input video
OUTPUT_FILE: the output video
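If you prefer not to edit the YAML by hand, these four fields can also be patched from a script. A minimal sketch using PyYAML (an extra dependency, not in the pin list above); the paths are the example values used in this guide, and note that re-dumping the file drops its comments:

# Patch the four machine-specific fields in the demo config.
import yaml

cfg_path = "demo/AVA/SLOWFAST_32x2_R101_50_50.yaml"

with open(cfg_path) as f:
    cfg = yaml.safe_load(f)

cfg["TRAIN"]["CHECKPOINT_FILE_PATH"] = "./SlowFast/configs/AVA/c2/SLOWFAST_32x2_R101_50_50.pkl"
cfg["DEMO"]["LABEL_FILE_PATH"] = "./demo/AVA/ava.json"
cfg["DEMO"]["INPUT_VIDEO"] = "./Vinput/2.mp4"
cfg["DEMO"]["OUTPUT_FILE"] = "./Voutput/1.mp4"

with open(cfg_path, "w") as f:
    yaml.safe_dump(cfg, f, default_flow_style=False, sort_keys=False)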
1.4.1.3 Download the Pretrained Weights
Download link: Link. This page hosts the official models trained on the various datasets; download the third checkpoint listed there. Put the downloaded model in the corresponding folder, matching the CHECKPOINT_FILE_PATH set in the previous step.
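Since the actual download URL is not reproduced here, the snippet below only checks that the checkpoint ended up where the config expects it. A minimal sketch; the path is the example CHECKPOINT_FILE_PATH value from above:

# Confirm the pretrained checkpoint sits at the path the config points to.
import os

ckpt = "./SlowFast/configs/AVA/c2/SLOWFAST_32x2_R101_50_50.pkl"

if os.path.isfile(ckpt):
    print(f"found checkpoint, {os.path.getsize(ckpt) / 1e6:.1f} MB")
else:
    print("checkpoint not found - check CHECKPOINT_FILE_PATH in the yaml")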
1.4.1.4 Running the Code
python3 tools/run_net.py --cfg demo/AVA/SLOWFAST_32x2_R101_50_50.yaml
Alternatively, run run_net.py by hand (e.g., from an IDE) and set the value of args.cfg_file yourself to point at the config file.
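If you would rather launch the demo from a Python script or notebook than from the shell, the same command can be invoked with subprocess. A minimal sketch, run from the SlowFast root directory:

# Launch the demo with the same arguments as the shell command above.
import subprocess

subprocess.run(
    ["python3", "tools/run_net.py", "--cfg", "demo/AVA/SLOWFAST_32x2_R101_50_50.yaml"],
    check=True,  # raise if the demo exits with a non-zero status
)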
1.4.1.5 Results: