{Reference Type}: Journal Article
{Title}: Hybrid multimodal fusion for graph learning in disease prediction.
{Author}: Wang R; Guo W; Wang Y; Zhou X; Leung JC; Yan S; Cui L
{Journal}: Methods
{Volume}: 229
{Year}: 2024 Jun 14
{Factor}: 4.647
{DOI}: 10.1016/j.ymeth.2024.06.003
{Abstract}: Graph neural networks (GNNs) have gained significant attention in disease prediction, where patients' latent embeddings are modeled as nodes and the similarities among patients are represented as edges. The graph structure, which determines how information is aggregated and propagated, plays a crucial role in graph learning. Recent approaches typically create graphs from patients' latent embeddings, which may not accurately reflect their real-world closeness. Our analysis reveals that raw data, such as demographic attributes and laboratory results, offers a wealth of information for assessing patient similarity and can compensate for graphs constructed exclusively from latent embeddings. In this study, we first construct adaptive graphs from the latent representations and from the raw data, and then merge these graphs via weighted summation. Because the merged graph may contain extraneous and noisy connections, we apply degree-sensitive edge pruning and kNN sparsification to selectively prune and sparsify these edges. Extensive experiments on two diagnostic prediction datasets demonstrate that our proposed method surpasses current state-of-the-art techniques.
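{Sketch}: The abstract outlines a three-step pipeline: per-modality graph construction (latent embeddings and raw data), weighted-sum fusion, and kNN sparsification with degree-sensitive pruning. The minimal NumPy sketch below illustrates that pipeline under stated assumptions; the cosine similarity measure, the fusion weight alpha, the neighborhood size k, the degree cap max_degree, and the exact pruning rule are all illustrative guesses, not the authors' implementation.

# Sketch of the fusion pipeline described in the abstract, written from the
# abstract alone -- NOT the authors' code. The similarity measure (cosine),
# the fusion weight `alpha`, the kNN size `k`, and the degree-sensitive
# pruning rule below are assumptions made for illustration.
import numpy as np

def cosine_similarity_graph(x: np.ndarray) -> np.ndarray:
    """Dense similarity graph: cosine similarity between row vectors."""
    x_norm = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)
    return x_norm @ x_norm.T

def knn_sparsify(adj: np.ndarray, k: int) -> np.ndarray:
    """Keep only each node's k strongest edges (kNN sparsification)."""
    adj = adj.copy()
    np.fill_diagonal(adj, 0.0)  # exclude self-loops from the top-k
    out = np.zeros_like(adj)
    idx = np.argsort(-adj, axis=1)[:, :k]      # indices of k strongest edges per row
    rows = np.arange(adj.shape[0])[:, None]
    out[rows, idx] = adj[rows, idx]
    return np.maximum(out, out.T)              # symmetrize the graph

def degree_sensitive_prune(adj: np.ndarray, max_degree: int) -> np.ndarray:
    """Assumed pruning rule: cap each node's degree by dropping its
    weakest edges until at most `max_degree` remain."""
    out = adj.copy()
    for i in range(out.shape[0]):
        nz = np.flatnonzero(out[i])
        if nz.size > max_degree:
            weakest = nz[np.argsort(out[i, nz])[:nz.size - max_degree]]
            out[i, weakest] = 0.0
            out[weakest, i] = 0.0              # keep the graph symmetric
    return out

# Toy data: 100 patients, 32-dim latent embeddings, 10 raw features
# (stand-ins for demographic attributes and laboratory results).
rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 32))
raw = rng.normal(size=(100, 10))

a_latent = cosine_similarity_graph(latent)     # graph from latent embeddings
a_raw = cosine_similarity_graph(raw)           # compensatory graph from raw data

alpha = 0.5                                    # assumed fusion weight
fused = alpha * a_latent + (1.0 - alpha) * a_raw   # weighted summation
fused = knn_sparsify(fused, k=10)                  # kNN sparsification
fused = degree_sensitive_prune(fused, max_degree=15)
print("edges kept:", int((fused != 0).sum() // 2))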