Smart Ways to Improve Learning in Time-Based Networks

Fri Jul 18 2025
Time-based networks, like social media or online shopping platforms, show how connections change over time. Lately, scientists have focused on building better models, known as temporal graph neural networks (TGNNs), to understand these networks. But there's a catch: little attention has been paid to the quality of the "wrong" (negative) examples used to train these models.

In time-based networks, negative sampling faces two big problems. First, there are very few "right" (positive) examples compared to "wrong" ones at any given time. Second, the "right" examples shift over time.

To tackle these issues, a new approach called Curriculum Negative Mining (CurNM) was introduced. This method adjusts the difficulty of the "wrong" examples as the model learns. CurNM uses a few smart tricks: it maintains a pool of "wrong" examples that mixes different types to handle the scarcity of "right" ones; it focuses on recent changes in the network to capture shifting patterns; and it adds some randomness to keep training stable. Tests on 12 datasets and three different models showed that CurNM outperforms other methods, and further experiments confirmed that the approach is useful and reliable.
https://localnews.ai/article/smart-ways-to-improve-learning-in-time-based-networks-3ceceb2e
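The curriculum idea described above can be sketched in code. This is a hypothetical illustration, not the paper's actual CurNM implementation: the function names, the recency score, and the `hardness` / `noise` parameters are all assumptions. It shows the three ingredients the article mentions: a mixed pool of easy and hard negatives, a preference for recently active nodes, and a dash of randomness for stability.

```python
import random

def curriculum_negative_sample(candidates, hardness, now, k,
                               recency_weight=0.5, noise=0.1, rng=None):
    """Draw k negative examples whose difficulty follows a curriculum.

    candidates: list of (node, last_seen_time) pairs (hypothetical format).
    hardness:   0.0 = mostly easy/random negatives, 1.0 = mostly hard ones;
                a training loop would anneal this upward over epochs.
    """
    rng = rng or random.Random(0)
    scored = []
    for node, t in candidates:
        # Recently active nodes make harder negatives in a temporal graph.
        recency = 1.0 / (1.0 + (now - t))
        # Random jitter keeps the ranking from collapsing to the same nodes.
        score = recency_weight * recency + noise * rng.random()
        scored.append((score, node))
    scored.sort(reverse=True)  # hardest (most recently active) first

    # Curriculum mix: take n_hard from the top of the ranking,
    # fill the rest uniformly at random from the remaining pool.
    n_hard = round(hardness * k)
    hard = [node for _, node in scored[:n_hard]]
    easy_pool = [node for _, node in scored[n_hard:]]
    easy = rng.sample(easy_pool, min(k - n_hard, len(easy_pool)))
    return hard + easy
```

Early in training one would call this with a small `hardness` so the model sees mostly random negatives, then raise it toward 1.0 so later batches are dominated by recent, hard-to-distinguish negatives.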

questions

    Could the focus on negative sampling in TGNNs be a distraction from deeper quality issues within the underlying data?
    Is the positive sparsity in temporal networks an inherent property of how interactions occur over time, or an artifact of how the data is collected?
    How does the positive sparsity in temporal networks impact the performance of TGNNs, and what strategies can be employed to mitigate this issue?