2022-06-17

Before & After:

  • Chengjie Zheng
    • before: Existing video-based contrastive learning models have not yet explored the effect of explicitly encouraging features to differ along the temporal dimension. We propose a new temporal contrastive learning framework (TimeCLR), built on two novel frame selection strategies, to improve existing contrastive self-supervised video representation learning methods. Spatial-temporal interval frame selection adds the task of distinguishing non-overlapping clips drawn from the same video. Spatial-temporal segment frame selection aims to differentiate the time steps of the feature maps of the input clips, increasing the temporal diversity of the learned features. Our proposed framework TimeCLR achieves significant improvements on video understanding tasks, which helps social scientists monitor and study laboratory animals’ social behavior in research on quantifying pain.
    • after:
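    A minimal sketch of the "spatial-temporal interval frame selection" idea described in the abstract above: two clips are drawn from disjoint temporal intervals of the same video so that a contrastive head can be trained to tell them apart. The function name, clip length, and gap parameters below are illustrative assumptions, not details taken from the paper.

      import torch

      def sample_nonoverlapping_clips(video, clip_len=16, min_gap=4):
          """Sample two non-overlapping clips from one video.

          video: tensor of shape (T, C, H, W).
          clip_len and min_gap are illustrative defaults, not values
          from the TimeCLR paper.
          """
          T = video.shape[0]
          assert T >= 2 * clip_len + min_gap, "video too short for two disjoint clips"

          # Start of the first clip, leaving room for the gap and the second clip.
          start1 = torch.randint(0, T - 2 * clip_len - min_gap + 1, (1,)).item()
          # The second clip starts after the first clip plus the minimum gap.
          start2 = torch.randint(start1 + clip_len + min_gap, T - clip_len + 1, (1,)).item()

          clip1 = video[start1:start1 + clip_len]
          clip2 = video[start2:start2 + clip_len]
          return clip1, clip2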
  • Tianyu Kang
    • before: By their standard design, neural networks require the number of output nodes to equal the number of clusters/classes. In practice, however, with a network of limited depth, samples from the same cluster/class may not gather neatly around a single mean point. We therefore propose a new neural network framework that automatically discovers multiple feasible mean points within each cluster/class, allowing the same problem to be solved with a shallower network. For example, the classic XOR problem, which asks a neural network to behave as an XOR gate, normally requires three layers; with our framework, two layers suffice.
    • after:
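    A toy sketch of the "multiple mean points per class" idea from the abstract above, applied to XOR: each class owns two learnable prototypes, and an input is scored by its distance to the nearest prototype of each class, so the two separated modes of class 1 ((0,1) and (1,0)) can be captured without a hidden layer. The class name, training loop, and hyperparameters are my own illustrative choices, not the authors' implementation.

      import torch
      import torch.nn as nn

      class MultiPrototypeClassifier(nn.Module):
          """Toy reconstruction of the multiple-mean-points idea: each class
          has k learnable prototypes; the class score is the (negated)
          squared distance to that class's nearest prototype."""

          def __init__(self, in_dim, n_classes, prototypes_per_class=2):
              super().__init__()
              self.n_classes = n_classes
              self.k = prototypes_per_class
              # One prototype bank of shape (n_classes * k, in_dim).
              self.prototypes = nn.Parameter(torch.randn(n_classes * self.k, in_dim))

          def forward(self, x):
              # Squared Euclidean distance from each input to every prototype.
              d = torch.cdist(x, self.prototypes) ** 2          # (B, n_classes * k)
              d = d.view(x.shape[0], self.n_classes, self.k)    # (B, n_classes, k)
              # Smaller distance to the closest prototype -> larger logit.
              return -d.min(dim=2).values                       # (B, n_classes)

      # XOR data: class 1 occupies two separated modes.
      x = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
      y = torch.tensor([0, 1, 1, 0])

      model = MultiPrototypeClassifier(in_dim=2, n_classes=2, prototypes_per_class=2)
      opt = torch.optim.Adam(model.parameters(), lr=0.1)
      for _ in range(200):
          opt.zero_grad()
          loss = nn.functional.cross_entropy(model(x), y)
          loss.backward()
          opt.step()
      print(model(x).argmax(dim=1))  # expected: tensor([0, 1, 1, 0])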