Resource description: 3-D convolutional neural networks (3-D convNets) have recently been proposed for action recognition in videos, and promising results have been achieved. However, existing 3-D convNets have two "artificial" requirements that may reduce the quality of video analysis: 1) they require a fixed-size (e.g., 112×112) input video; and 2) most 3-D convNets require a fixed-length input (i.e., video shots with a fixed number of frames). To tackle these issues, we propose an end-to-end pipeline named Two-stream 3-D convNet Fusion, which can recognize human actions in videos
of arbitrary size and length using multiple features. Specifically,
we decompose a video into spatial and temporal shots. By taking
a sequence of shots as input, each stream is implemented using
a spatial temporal pyramid pooling (STPP) convNet with a long
short-term memory (LSTM) or CNN-E model, whose softmax scores are combined by late fusion. We devise the STPP convNet to extract equal-dimensional descriptions for each variable-size shot, and we adopt the LSTM/CNN-E model to learn a global description for the input video from these time-varying descriptions. With these advantages, our method should benefit other 3-D-CNN-based video analysis methods. We empirically evaluate our method for
action recognition in videos and the experimental results show that
our method outperforms the state-of-the-art methods (both 2-D
and 3-D based) on three standard benchmark datasets (UCF101, HMDB51, and ACT).
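
Since the pipeline hinges on pooling variable-size 3-D feature maps into fixed-length descriptors, a minimal sketch may help make the idea concrete. The sketch below assumes PyTorch-style tensors of shape (N, C, T, H, W); the pyramid levels, LSTM size, and fusion weight are illustrative assumptions, not the authors' released configuration.

```python
# A minimal, hypothetical sketch of the pipeline's core ideas, assuming
# PyTorch tensors of shape (N, C, T, H, W). Pyramid levels, LSTM size, and
# the fusion weight are illustrative choices, not the paper's configuration.
import torch
import torch.nn.functional as F

def stpp(features, levels=((1, 1, 1), (1, 2, 2), (2, 4, 4))):
    """Spatial temporal pyramid pooling: adaptively max-pool the variable
    (T, H, W) volume into fixed grids and concatenate, so the descriptor
    length is independent of the shot's spatial size and frame count."""
    n = features.shape[0]
    pooled = [F.adaptive_max_pool3d(features, size).reshape(n, -1)
              for size in levels]
    return torch.cat(pooled, dim=1)

def late_fusion(spatial_scores, temporal_scores, w=0.5):
    """Weighted average of per-stream softmax scores; a stand-in for the
    paper's late-fusion rule."""
    return w * spatial_scores + (1.0 - w) * temporal_scores

# Shots of different sizes and lengths yield equal-length descriptors,
# which is what lets the convNet accept arbitrary input videos.
d1 = stpp(torch.randn(1, 256, 8, 14, 14))
d2 = stpp(torch.randn(1, 256, 5, 9, 12))
assert d1.shape == d2.shape

# The per-shot descriptors are then aggregated over time, e.g. by an LSTM,
# into a single global description of the video (hypothetical sizes).
lstm = torch.nn.LSTM(input_size=d1.shape[1], hidden_size=512, batch_first=True)
shots = torch.stack([d1, d2], dim=1)   # (N, num_shots, descriptor_dim)
_, (h_n, _) = lstm(shots)              # h_n[-1] is the global description
```

Adaptive max pooling is one reasonable way to realize the fixed-grid pooling; the exact pooling operator and fusion rule in the released model may differ.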