Keywords: Cross-domain sequential recommendation; Graph neural network; Recommendation system; Self-attention mechanism

Source: DOI:10.1016/j.neunet.2024.106488

Abstract:
The objective of cross-domain sequential recommendation is to forecast upcoming interactions by leveraging past interactions across diverse domains. Most methods aim to exploit single-domain and cross-domain information as fully as possible for personalized preference extraction and effective integration. However, on one hand, most models ignore the fact that cross-domain information is composed of multiple single domains when generating representations; they treat cross-domain information the same way as single-domain information, resulting in noisy representations. Only by imposing suitable constraints on cross-domain information during representation generation can subsequent models minimize interference when modeling user preferences. On the other hand, some methods neglect the joint consideration of users' long-term and short-term preferences and reduce the weight of cross-domain user preferences to minimize noise interference. To better exploit the mutual promotion of cross-domain and single-domain factors, we propose a novel model (C2DREIF) that uses Gaussian graph encoders to process information, effectively constraining the correlation of information and capturing useful contextual information more accurately. It also employs a Top-down transformer to extract user intents within each domain, taking into account the user's long-term and short-term preferences. Additionally, entropy regularization is applied to enhance contrastive learning and mitigate the impact of randomness caused by the composition of negative samples.
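The entropy-regularized contrastive learning mentioned above can be illustrated with a minimal sketch. The abstract does not give the exact loss, so the following is an assumption: an InfoNCE-style contrastive loss over one positive and a set of negatives, with an entropy term (hypothetical weight `lam`) added to flatten the distribution over negatives and thereby dampen the randomness introduced by how the negative set happens to be composed.

```python
import numpy as np

def entropy_regularized_infonce(anchor, positive, negatives, tau=0.1, lam=0.1):
    """Sketch of an InfoNCE loss with an entropy regularizer.

    `tau` is the softmax temperature; `lam` (hypothetical) weights the
    entropy of the similarity distribution. Subtracting lam * entropy
    rewards flatter distributions, one plausible way to reduce the
    impact of randomly composed negative samples.
    """
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    # Similarities of the anchor to the positive (index 0) and negatives.
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / tau
    logits -= logits.max()                       # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()

    nce = -np.log(probs[0])                      # standard InfoNCE term
    entropy = -(probs * np.log(probs + 1e-12)).sum()
    return nce - lam * entropy
```

With a positive that is well aligned to the anchor, the loss is lower than with a misaligned one, while the entropy term keeps the model from relying on any single sharply weighted negative.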