2021-09-02

  • Yong Zhuang: During inference the model can generate the full sequence y given X by producing one token at a time and advancing time by one step (a minimal greedy-decoding sketch follows this list).
  • Tianyu Kang: The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence. Neural networks are not, in general, capable of this, and it has been widely thought that catastrophic forgetting is an inevitable feature of connectionist models.
  • Zihan Li: While noisy labels and outliers can harm semi-supervised learning in different ways, such as overfitting, inaccurate similarity measurement, and skewed density estimation, they share one common attribute: the corrupted samples tend to lie far from the intrinsic underlying graph structure guided by label information (see the graph-disagreement sketch after this list).
  • Patrick Flynn: We show that our technique reconstructs the strange attractors of synthetic and real-world systems better than existing techniques, and that it creates consistent, predictive representations of even stochastic systems.
  • Chengjie Zheng: Any novel technique for manipulating images should be developed and applied responsibly, as it could be misused to produce fake or misleading information.
  • Wei Ding: NIST is looking for input from a diverse set of industry and research experts, so we would love to get your perspective on this. The final compendium will be publicly available, so your help and resources will reach everyone who accesses it, and it will support current research efforts – hopefully including your own.
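
On Yong Zhuang's excerpt: the sentence describes autoregressive decoding, where each new token is conditioned on the input X and the tokens generated so far. Below is a minimal greedy-decoding sketch; `model`, `bos_id`, and `eos_id` are hypothetical placeholders, not names from the quoted paper.

```python
# Minimal greedy autoregressive decoding loop (illustrative sketch).
# `model` is assumed to be a callable returning next-token logits
# given the input x and the current output prefix y.
import numpy as np

def generate(model, x, bos_id, eos_id, max_len=50):
    """Generate y one token at a time, conditioning on x and the prefix so far."""
    y = [bos_id]
    for _ in range(max_len):
        logits = model(x, y)                 # next-token scores given x and prefix y
        next_token = int(np.argmax(logits))  # greedy choice; sampling also works
        y.append(next_token)                 # advance time by one step
        if next_token == eos_id:             # stop at the end-of-sequence token
            break
    return y[1:]                             # drop the start token
```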
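
On Zihan Li's excerpt: one way to see the "far from the graph structure" attribute is to propagate labels over a neighborhood graph and check how strongly each sample's given label disagrees with the propagated distribution. The sketch below uses scikit-learn's LabelSpreading as a stand-in graph-based model; the toy dataset, flipped labels, and scoring are illustrative assumptions, not the method from the quoted paper.

```python
# Illustrative sketch: corrupted (mislabeled) samples tend to disagree with
# the label structure implied by a neighborhood graph.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.semi_supervised import LabelSpreading

X, y = make_moons(n_samples=200, noise=0.05, random_state=0)
y_noisy = y.copy()
y_noisy[:5] = 1 - y_noisy[:5]            # flip a few labels to simulate corruption

model = LabelSpreading(kernel="knn", n_neighbors=10)
model.fit(X, y_noisy)

# Disagreement between each sample's given label and the graph-propagated
# label distribution: corrupted samples tend to score high.
proba = model.label_distributions_
residual = 1.0 - proba[np.arange(len(y_noisy)), y_noisy]
suspects = np.argsort(residual)[-5:]
print("most suspicious samples:", suspects)
```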