2021-11-12

Example Sentences:

  • Yong Zhuang: A huge challenge for engineers and researchers in the fields of data mining and machine learning (ML) is high-dimensional data analysis. Feature selection offers a simple yet effective way to overcome this challenge by eliminating redundant and irrelevant data. Removing the irrelevant data improves learning accuracy, reduces the computation time, and facilitates a better understanding of the learning model or the data. (See the first sketch after this list.)

  • Hefei Qiu: As Hindus have from time immemorial, we join you in embracing this festival of lights and renewing the hope that it kindles. We come together, in peace, to reaffirm our bonds with each other, and seek divine inspiration. Diwali is known as the festival of lights, but its ethos is about celebrating the victory of good over evil, light over darkness, knowledge over ignorance and love over hate. While we recognize that the historical significance is celebrated differently throughout India, the essence of Diwali is a universal human expression of hope, unity, and peace.

  • Zihan Li: These methods have the stability and reliability of trust-region methods but are much simpler to implement, requiring only a few lines of code change to a vanilla policy gradient implementation; they are applicable in more general settings (for example, when using a joint architecture for the policy and value function), and have better overall performance. (See the second sketch after this list.)
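
As an aside on Yong Zhuang's sentence above, here is a minimal sketch of the
feature-selection idea it describes, assuming scikit-learn. The synthetic
dataset, the mutual-information scorer, and k=10 are illustrative choices,
not details from the quoted text.

    # Score each feature's relevance to the label and keep only the
    # top-k columns, discarding redundant and irrelevant ones.
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectKBest, mutual_info_classif

    # Synthetic high-dimensional data: 200 features, only 10 informative.
    X, y = make_classification(n_samples=500, n_features=200,
                               n_informative=10, random_state=0)

    # Keep the 10 features with the highest mutual information with y.
    selector = SelectKBest(mutual_info_classif, k=10)
    X_reduced = selector.fit_transform(X, y)

    print(X.shape, "->", X_reduced.shape)  # (500, 200) -> (500, 10)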
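Zihan Li's sentence reads like the proximal policy optimization (PPO) paper's
pitch for its clipped surrogate objective. Below is a minimal sketch of that
"few lines of code change", assuming PyTorch; the tensor names and the clip
range eps=0.2 are assumptions for illustration, not values from the quoted
source.

    import torch

    def vanilla_pg_loss(logp, advantages):
        # Standard policy gradient loss: -E[log pi(a|s) * A]
        return -(logp * advantages).mean()

    def clipped_surrogate_loss(logp, logp_old, advantages, eps=0.2):
        # Probability ratio between the new policy and the old one.
        ratio = torch.exp(logp - logp_old)
        # Clipping keeps the update near the old policy, a cheap
        # stand-in for a full trust-region constraint.
        clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps)
        return -torch.min(ratio * advantages, clipped * advantages).mean()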

Before & After:

  • Hefei Qiu
    • before: Compared with SBERT models, CLSR also shows solid improvement. Such an improvement is actually more significant as we only train the CLSR model using the sentences with entailment relation which are roughly 1/3 of the NLI data, and SBERT fine-tunes the BERT models on the same NLI dataset using all the training data and label information.

    • after: Compared with SBERT models, CLSR also shows solid improvement. It would be reasonable to infer that the improvement could be even more significant, since we use only about 1/3 of the Natural Language Inference (NLI) data that SBERT is trained on.

  • Tianyu Kang
    • before: We experiment this on Cifar-10 with VGG19, and the result in table 3 shows within 250 epochs, the model training with shuffled labels have 52% less skill average active rate.

    • after: Besides the theoretical guarantees, we empirically validate our approach on CIFAR-10 with VGG19. The results in Table 3 show that, within 250 epochs, the model trained with shuffled labels has a 52% lower average skill active rate.

  • Yong Zhuang
    • before: Therefore, two different considerations must be taken into account in this work. The first is identifying small subsets of the most related and meaningful features from the original Spatio-temporal feature space. Second, the complex space-time dynamics and sensitive dependence on initial and boundary conditions.

    • after: Therefore, this work takes two considerations into account: one is how to identify the most relevant and meaningful features from the original spatio-temporal feature space; the other is how to model complex space-time dynamics with sensitive dependence on initial and boundary conditions.

  • Zihan Li
    • before: Unlike traditional symbolic regression methods, our algorithm can extract the mathematics expression of equations from a sequence of lower-level atomic units by learning the symbolic neural.

    • after: Compared to traditional symbolic regression methods, the proposed algorithm is more stable and reliable, both in extracting mathematical expressions from a sequence of lower-level atomic units and in the step-by-step interpretive inference process.