-
Wei Ding: Science is often hard to read. Most people assume that its difficulties are born out of necessity, out of the extreme complexity of scientific concepts, data and analysis. We argue here that complexity of thought need not lead to impenetrability of expression; we demonstrate a number of rhetorical principles that can produce clarity in communication without oversimplifying scientific issues. The results are substantive, not merely cosmetic: Improving the quality of writing actually improves the quality of thought. source
-
Hefei Qiu: The dominant paradigm in supervised NLP today is learning from examples, where machine learning algorithms are trained using a large set of task-specific input-output pairs. In contrast, humans learn to perform the same task by reading a description, after which they are able to perform the task in a zero-shot manner—indeed, this is how crowd-sourced NLP datasets are constructed. source
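A schematic contrast between the two paradigms the quote describes (illustrative only; the task, labels, and field names are hypothetical, not from the quoted work):

```python
# Learning from examples: the model sees many task-specific
# input-output pairs (a standard supervised NLP dataset).
supervised_data = [
    {"input": "the movie was wonderful", "label": "positive"},
    {"input": "a tedious, joyless film", "label": "negative"},
    # ...thousands more labeled pairs
]

# Learning from a description: a single instruction and zero labeled
# examples, which is what a crowd worker receives when such datasets
# are constructed.
task_description = (
    "Read the movie review and decide whether its sentiment is "
    "positive or negative."
)
zero_shot_query = {
    "instruction": task_description,
    "input": "the movie was wonderful",
}
```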
-
Yong Zhuang: The second piece of intuition, which has captured the imagination of machine learning and neuroscience alike, is that deep neural networks can disentangle highly curved manifolds in input space into flattened manifolds in hidden space.
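A minimal sketch of this flattening effect, assuming a toy setup of my own choosing (concentric circles, a one-hidden-layer ReLU network, and linear probes; none of this is from the quoted work):

```python
# Two concentric circles form a curved manifold that no line in the
# input space can separate; after a trained hidden layer, a linear
# probe typically can. Dataset, sizes, and seeds are arbitrary.
import numpy as np
from sklearn.datasets import make_circles
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = make_circles(n_samples=1000, noise=0.05, factor=0.4, random_state=0)

# Linear probe directly on the raw inputs.
linear_on_input = LogisticRegression().fit(X, y).score(X, y)

# Small ReLU network trained on the task.
mlp = MLPClassifier(hidden_layer_sizes=(32,), activation="relu",
                    max_iter=2000, random_state=0).fit(X, y)

# Forward pass to the hidden layer using the learned weights.
H = np.maximum(0, X @ mlp.coefs_[0] + mlp.intercepts_[0])

# Linear probe on the hidden representation.
linear_on_hidden = LogisticRegression().fit(H, y).score(H, y)

print(f"linear probe on inputs: {linear_on_input:.2f}")   # typically near chance
print(f"linear probe on hidden: {linear_on_hidden:.2f}")  # typically near 1.0
```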
-
Tianyu Kang: Data is playing an especially critical role in enabling computers to interpret images as compositions of objects, an achievement that humans manage effortlessly but that has so far been elusive for machines.
-
Olga Andreeva: We present a functional aspect of our approach, in which we frame the co-optimization of the neural parameters and structures into a functional optimization in the space of distributions of the neuron weights, and show that our splitting strategy can be viewed as a second-order descent for escaping saddle points in the ∞-Wasserstein space of distributions, while the standard parametric gradient descent corresponds to a first-order descent in the same space. source
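A hedged sketch of the second-order picture the quote alludes to, in notation of my own choosing (the loss Φ, neuron σ, and splitting matrix S are labels I'm assuming, not necessarily the paper's):

```latex
% A neuron \sigma(\theta, x) contributes loss
\[
  L(\theta) = \mathbb{E}_x\!\left[ \Phi\big(\sigma(\theta, x)\big) \right].
\]
% Split the neuron into two half-weighted copies \theta \pm \epsilon\delta.
% The first-order terms cancel, leaving a purely second-order change
\[
  \Delta L = \frac{\epsilon^2}{2}\, \delta^{\top} S(\theta)\, \delta + \cdots,
  \qquad
  S(\theta) = \mathbb{E}_x\!\left[ \Phi'\big(\sigma(\theta, x)\big)\,
              \nabla_{\theta}^{2}\, \sigma(\theta, x) \right].
\]
% Splitting strictly decreases the loss iff \lambda_{\min}(S(\theta)) < 0:
% a second-order move that escapes points where first-order (parametric)
% gradient descent stalls, matching the quote's distinction between the
% two orders of descent.
```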
-
Chengjie Zheng: We demonstrate performance comparable to strong models such as ResNet-50 and ViT when training on ImageNet for classification; competitive performance on the AudioSet sound event classification benchmark (using raw audio, video, or both); and strong performance relative to comparable approaches on ModelNet-40 point cloud classification. source
-
Zihan Li: As shown in Fig. 1(c-d), as label noise (the percentage of inaccurately labeled samples among all labeled samples) increases, all compared methods degrade as expected; RS3C degrades the least, demonstrating the robustness of the proposed algorithm.