infoveillance

  • Article type: Journal Article
    Background: The coronavirus disease (COVID-19) pandemic has impacted patients receiving methadone maintenance treatment (MMT) through opioid treatment programs (OTPs), especially because of the unique challenges of the care delivery model. Previously, documentation of patient experiences during emergencies has often come years after the fact, in part because of a substantial real-time data void. Methods: We extracted 308 posts mentioning COVID-19 keywords on r/methadone, an online community on Reddit where patients receiving MMT share information, posted between January 31, 2020 and September 30, 2020. Of these, 215 posts self-reported an impact on the author's MMT. Using qualitative content analysis, we characterized the impacts described in these posts and identified four emergent themes describing patients' experiences of impacts to MMT during COVID-19. Results: The themes included (1) 54.4% of posts reporting impediments to accessing their methadone, (2) 28.4% reporting impediments to accessing physical OTPs, (3) 19.5% reporting having to self-manage their care, and (4) 4.7% reporting impediments to accessing OTP providers and staff. Conclusions: Patients described unanticipated consequences of one-size-fits-all policies that were unevenly applied, resulting in suboptimal dosing, increased perceived risk of acquiring COVID-19 at OTPs, and reduced interaction with OTP providers and staff. While preliminary, these results are formative for follow-up surveillance metrics for patients of OTPs as well as digitally mediated resource needs for this online community. This study serves as a model of how social media can be employed during and after emergencies to hear the lived experiences of patients for informed emergency preparedness and response.
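    As a rough illustration of the post-selection step in the Methods above, the Python sketch below filters a collection of Reddit posts by date window and COVID-19 keywords. The keyword list, the post dictionary fields, and the helper names are assumptions for illustration, not the study's actual pipeline.

```python
from datetime import datetime, timezone

# Illustrative COVID-19 keyword list; the study's full list is not reproduced here.
COVID_KEYWORDS = {"covid", "covid-19", "coronavirus", "pandemic", "quarantine"}

WINDOW_START = datetime(2020, 1, 31, tzinfo=timezone.utc)
WINDOW_END = datetime(2020, 9, 30, 23, 59, 59, tzinfo=timezone.utc)

def mentions_keyword(text: str) -> bool:
    """Case-insensitive check for any COVID-19 keyword in a post's text."""
    lowered = text.lower()
    return any(kw in lowered for kw in COVID_KEYWORDS)

def filter_posts(posts):
    """Keep posts that fall inside the study window and mention a keyword.

    Each post is assumed to be a dict with 'created_utc' (Unix timestamp),
    'title', and 'selftext' fields, as in common Reddit data dumps.
    """
    kept = []
    for post in posts:
        created = datetime.fromtimestamp(post["created_utc"], tz=timezone.utc)
        text = f"{post.get('title', '')} {post.get('selftext', '')}"
        if WINDOW_START <= created <= WINDOW_END and mentions_keyword(text):
            kept.append(post)
    return kept

if __name__ == "__main__":
    sample = [{"created_utc": 1585699200, "title": "Take-homes during covid?", "selftext": ""}]
    print(len(filter_posts(sample)))  # -> 1
```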
  • Article type: Journal Article
    BACKGROUND: Social media chatter in 2020 has been largely dominated by the COVID-19 pandemic. Existing research shows that COVID-19 discourse is highly politicized, with political preferences linked to beliefs and disbeliefs about the virus. As happens with topics that become politicized, people may fall into echo chambers, the idea that one is only presented with information one already agrees with, thereby reinforcing one's confirmation bias. Understanding the relationship between information dissemination and political preference is crucial for effective public health communication.
    OBJECTIVE: We aimed to study the extent of polarization and examine the structure of echo chambers related to COVID-19 discourse on Twitter in the United States.
    METHODS: First, we presented Retweet-BERT, a scalable and highly accurate model for estimating user polarity by leveraging language features and network structures. Then, by analyzing the user polarity predicted by Retweet-BERT, we provided new insights into the characterization of partisan users.
    RESULTS: We observed that right-leaning users were noticeably more vocal and active in the production and consumption of COVID-19 information. We also found that most of the highly influential users were partisan, which may contribute to further polarization. Importantly, while echo chambers exist in both the right- and left-leaning communities, the right-leaning community was by far more densely connected within its echo chamber and isolated from the rest.
    CONCLUSIONS: We provided empirical evidence that political echo chambers are prevalent, especially in the right-leaning community, which can exacerbate users' exposure to information in line with their pre-existing views. Our findings have broader implications for developing effective public health campaigns and promoting the circulation of factual information online.
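    The sketch below is a deliberately simplified stand-in for the network-structure side of polarity estimation: it propagates polarity scores from a few seed accounts across a retweet graph. It is not Retweet-BERT (which also leverages language features); the seed labels and edges are invented.

```python
from collections import defaultdict

retweets = [  # (retweeter, retweeted_user) pairs; invented for illustration
    ("u1", "left_seed"), ("u2", "u1"), ("u3", "right_seed"), ("u4", "u3"),
]
seeds = {"left_seed": -1.0, "right_seed": +1.0}  # -1 = left-leaning, +1 = right-leaning

# Build an undirected adjacency list, since retweeting usually signals endorsement.
neighbors = defaultdict(set)
for src, dst in retweets:
    neighbors[src].add(dst)
    neighbors[dst].add(src)

# Iteratively average neighbor scores, keeping seed users fixed.
scores = {user: seeds.get(user, 0.0) for user in neighbors}
for _ in range(20):
    updated = {}
    for user, nbrs in neighbors.items():
        if user in seeds:
            updated[user] = seeds[user]
        else:
            vals = [scores[n] for n in nbrs]
            updated[user] = sum(vals) / len(vals) if vals else 0.0
    scores = updated

print({user: round(score, 2) for user, score in scores.items()})
```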
  • Article type: Journal Article
    The wide adoption of social media in daily life renders it a rich and effective resource for conducting near real-time assessments of consumers' perceptions of health services. However, its use in these assessments can be challenging because of the vast amount of data and the diversity of content in social media chatter.
    This study aims to develop and evaluate an automatic system involving natural language processing and machine learning to automatically characterize user-posted Twitter data about health services using Medicaid, the single largest source of health coverage in the United States, as an example.
    We collected data from Twitter in two ways: via the public streaming application programming interface using Medicaid-related keywords (Corpus 1) and by using the website's search option for tweets mentioning agency-specific handles (Corpus 2). We manually labeled a sample of tweets into 5 predetermined categories or an "other" category and artificially increased the number of training posts from specific low-frequency categories. Using the manually labeled data, we trained and evaluated several supervised learning algorithms, including support vector machine, random forest (RF), naïve Bayes, shallow neural network (NN), k-nearest neighbor, bidirectional long short-term memory, and bidirectional encoder representations from transformers (BERT). We then applied the best-performing classifier to the collected tweets for postclassification analyses to assess the utility of our methods.
    We manually annotated 11,379 tweets (Corpus 1: 9179; Corpus 2: 2200) and used 7930 (69.7%) for training, 1449 (12.7%) for validation, and 2000 (17.6%) for testing. A classifier based on BERT obtained the highest accuracies (81.7%, Corpus 1; 80.7%, Corpus 2) and F1 scores on consumer feedback (0.58, Corpus 1; 0.90, Corpus 2), outperforming the second-best classifiers in terms of accuracy (74.6%, RF on Corpus 1; 69.4%, RF on Corpus 2) and F1 score on consumer feedback (0.44, NN on Corpus 1; 0.82, RF on Corpus 2). Postclassification analyses revealed differing intercorpora distributions of tweet categories, with political (400,778/628,411, 63.78%) and consumer feedback (15,073/27,337, 55.14%) tweets being the most frequent for Corpus 1 and Corpus 2, respectively.
    The broad and variable content of Medicaid-related tweets necessitates automatic categorization to identify topic-relevant posts. Our proposed system presents a feasible solution for automatic categorization and can be deployed and generalized for health service programs other than Medicaid. Annotated data and methods are available for future studies.
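    The sketch below shows one of the baseline classifiers named in the Methods (TF-IDF features with a random forest) on toy, invented tweets; the BERT model that performed best is not reproduced here, and the category labels are placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Toy tweets and labels, repeated to give the example enough rows.
tweets = [
    "Medicaid denied my claim again, so frustrating",           # consumer feedback
    "Waited two hours on the phone with the Medicaid office",   # consumer feedback
    "New bill would expand Medicaid in our state",              # political
    "Senator pushes back on Medicaid work requirements",        # political
] * 25
labels = ["feedback", "feedback", "political", "political"] * 25

X_train, X_test, y_train, y_test = train_test_split(
    tweets, labels, test_size=0.2, random_state=42, stratify=labels
)

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    RandomForestClassifier(n_estimators=200, random_state=42),
)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```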
  • Article type: Journal Article
    COVID-19, caused by SARS-CoV-2, has led to a global pandemic. The World Health Organization has also declared an infodemic (ie, a plethora of information regarding COVID-19 containing both false and accurate information circulated on the internet). Hence, it has become critical to test the veracity of information shared online and analyze the evolution of discussed topics among citizens related to the pandemic.
    This research analyzes the public discourse on COVID-19. It characterizes risk communication patterns in four Asian countries with outbreaks at varying degrees of severity: South Korea, Iran, Vietnam, and India.
    We collected tweets on COVID-19 from four Asian countries in the early phase of the disease outbreak from January to March 2020. The data set was collected by relevant keywords in each language, as suggested by locals. We present a method to automatically extract a time-topic cohesive relationship in an unsupervised fashion based on natural language processing. The extracted topics were evaluated qualitatively based on their semantic meanings.
    This research found that each government's official phases of the epidemic were not well aligned with the degree of public attention represented by the daily tweet counts. Inspired by the issue-attention cycle theory, the presented natural language processing model can identify meaningful transition phases in the topics discussed among citizens. The analysis revealed an inverse relationship between the tweet count and topic diversity.
    This paper compares similarities and differences of pandemic-related social media discourse in Asian countries. We observed multiple prominent peaks in the daily tweet counts across all countries, indicating multiple issue-attention cycles. Our analysis identified which topics the public concentrated on; some of these topics were related to misinformation and hate speech. These findings and the ability to quickly identify key topics can empower global efforts to fight against an infodemic during a pandemic.
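    As a simplified stand-in for the time-topic analysis, the sketch below fits a small topic model separately per time window and prints the top terms. It is not the authors' exact unsupervised method, and the windowed tweets are invented.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Invented tweets grouped into monthly windows.
windows = {
    "2020-01": ["first confirmed case reported", "airport screening begins",
                "case reported near the border", "screening at the airport"],
    "2020-03": ["lockdown announced for the city", "masks sold out everywhere",
                "city lockdown extended", "where to buy masks"],
}

for window, docs in windows.items():
    vec = CountVectorizer(stop_words="english")
    dtm = vec.fit_transform(docs)                 # document-term matrix for this window
    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    lda.fit(dtm)
    terms = vec.get_feature_names_out()
    for k, comp in enumerate(lda.components_):
        top = [terms[i] for i in comp.argsort()[-3:][::-1]]  # 3 highest-weight terms
        print(window, f"topic {k}:", ", ".join(top))
```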
  • Article type: Journal Article
    Social media are considered promising and viable sources of data for gaining insights into various disease conditions and patients' attitudes, behaviors, and medications. They can be used to recognize communication and behavioral themes of problematic use of prescription drugs. However, mining and analyzing social media data have challenges and limitations related to topic deduction and data quality. As a result, we need a structured approach to analyze social media content related to drug abuse in a manner that can mitigate the challenges and limitations surrounding the use of such data.
    This study aimed to develop and evaluate a framework for mining and analyzing social media content related to drug abuse. The framework is designed to mitigate challenges and limitations related to topic deduction and data quality in social media data analytics for drug abuse.
    The proposed framework started with defining different terms related to the keywords, categories, and characteristics of the topic of interest. We then used the Crimson Hexagon platform to collect data based on a search query informed by a drug abuse ontology developed using the identified terms. We subsequently preprocessed the data and examined the quality using an evaluation matrix. Finally, a suitable data analysis approach could be used to analyze the collected data.
    The framework was evaluated using the opioid epidemic as a drug abuse case analysis. We demonstrated the applicability of the proposed framework to identify public concerns toward the opioid epidemic and the most discussed topics on social media related to opioids. The results from the case analysis showed that the framework could improve the discovery and identification of topics in social media domains characterized by a plethora of highly diverse terms and lack of a commonly available dictionary or language by the community, such as in the case of opioid and drug abuse.
    The proposed framework addressed the challenges related to topic detection and data quality. We demonstrated the applicability of the proposed framework to identify the common concerns toward the opioid epidemic and the most discussed topics on social media related to opioids.
  • Article type: Journal Article
    Timely allocation of medical resources for coronavirus disease (COVID-19) requires early detection of regional outbreaks. Internet browsing data may predict case outbreaks that have not yet been confirmed in local populations.
    We investigated whether search-engine query patterns can help to predict COVID-19 case rates at the state and metropolitan area levels in the United States.
    We used regional confirmed case data from the New York Times and Google Trends results from 50 states and 166 county-based designated market areas (DMA). We identified search terms whose activity precedes and correlates with confirmed case rates at the national level. We used univariate regression to construct a composite explanatory variable based on best-fitting search queries offset by temporal lags. We measured the raw and z-transformed Pearson correlation and root-mean-square error (RMSE) of the explanatory variable with out-of-sample case rate data at the state and DMA levels.
    Predictions were highly correlated with confirmed case rates at the state (mean r=0.69, 95% CI 0.51-0.81; median RMSE 1.27, IQR 1.48) and DMA levels (mean r=0.51, 95% CI 0.39-0.61; median RMSE 4.38, IQR 1.80), using search data available up to 10 days prior to confirmed case rates. They fit case-rate activity in 49 of 50 states and in 103 of 166 DMA at a significance level of .05.
    Identifiable patterns in search query activity may help to predict emerging regional outbreaks of COVID-19, although they remain vulnerable to stochastic changes in search intensity.
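    A minimal sketch of the lag-selection idea described in the Methods: shift a search-interest series by candidate lags, keep the lag with the strongest Pearson correlation against case rates, and report the RMSE of a univariate linear fit. Both series below are synthetic.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
days = 120
cases = np.cumsum(rng.poisson(3, size=days)).astype(float)   # toy case-rate curve
search = np.roll(cases, -10) + rng.normal(0, 5, size=days)   # "search interest" leading cases by ~10 days

best_lag, best_r = None, -np.inf
for lag in range(1, 21):                      # try 1- to 20-day leads
    x, y = search[:-lag], cases[lag:]         # search at day t vs cases at day t + lag
    r, _ = pearsonr(x, y)
    if r > best_r:
        best_lag, best_r = lag, r

x, y = search[:-best_lag], cases[best_lag:]
slope, intercept = np.polyfit(x, y, 1)        # univariate regression at the best lag
rmse = np.sqrt(np.mean((slope * x + intercept - y) ** 2))
print(f"best lag = {best_lag} days, r = {best_r:.2f}, RMSE = {rmse:.2f}")
```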
  • Article type: Journal Article
    BACKGROUND: Coronavirus disease (COVID-19) has affected more than 200 countries and territories worldwide. This disease poses an extraordinary challenge for public health systems because screening and surveillance capacity is often severely limited, especially during the beginning of the outbreak; this can fuel the outbreak, as many patients can unknowingly infect other people.
    OBJECTIVE: The aim of this study was to collect and analyze posts related to COVID-19 on Weibo, a popular Twitter-like social media site in China. To our knowledge, this infoveillance study employs the largest, most comprehensive, and most fine-grained social media data to date to predict COVID-19 case counts in mainland China.
    METHODS: We built a Weibo user pool of 250 million people, approximately half the entire monthly active Weibo user population. Using a comprehensive list of 167 keywords, we retrieved and analyzed around 15 million COVID-19-related posts from our user pool from November 1, 2019 to March 31, 2020. We developed a machine learning classifier to identify "sick posts," in which users report their own or other people's symptoms and diagnoses related to COVID-19. Using officially reported case counts as the outcome, we then estimated the Granger causality of sick posts and other COVID-19 posts on daily case counts. For a subset of geotagged posts (3.10% of all retrieved posts), we also ran separate predictive models for Hubei province, the epicenter of the initial outbreak, and the rest of mainland China.
    RESULTS: We found that reports of symptoms and diagnosis of COVID-19 significantly predicted daily case counts up to 14 days ahead of official statistics, whereas other COVID-19 posts did not have similar predictive power. For the subset of geotagged posts, we found that the predictive pattern held true for both Hubei province and the rest of mainland China regardless of the unequal distribution of health care resources and the outbreak timeline.
    CONCLUSIONS: Public social media data can be usefully harnessed to predict infection cases and inform timely responses. Researchers and disease control agencies should pay close attention to the social media infosphere regarding COVID-19. In addition to monitoring overall search and posting activities, leveraging machine learning approaches and theoretical understanding of information sharing behaviors is a promising approach to identify true disease signals and improve the effectiveness of infoveillance.
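    The sketch below illustrates the Granger-causality check described in the Methods using statsmodels, asking whether a daily "sick post" count helps predict the official daily case count. Both series are synthetic stand-ins, and the 14-day lag mirrors the reported lead time.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)
n = 150
sick_posts = rng.poisson(20, size=n).astype(float)
# Make case counts depend on sick posts from ~14 days earlier, plus noise.
cases = 0.8 * np.roll(sick_posts, 14) + rng.normal(0, 2, size=n)

# grangercausalitytests expects a 2-column array and tests whether the SECOND
# column Granger-causes the FIRST.
data = pd.DataFrame({"cases": cases[14:], "sick_posts": sick_posts[14:]})
results = grangercausalitytests(data[["cases", "sick_posts"]], maxlag=14)
p_value = results[14][0]["ssr_ftest"][1]   # F-test p-value at lag 14
print(f"p-value at lag 14: {p_value:.4f}")
```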
  • Article type: Journal Article
    Hookah tobacco smoking (HTS) is a particularly important issue for public health professionals to address owing to its prevalence and deleterious health effects. Social media sites can be a valuable tool for public health officials to conduct informational health campaigns. Current social media platforms provide researchers with opportunities to better identify and target specific audiences and even individuals. However, we are not aware of systematic research attempting to identify audiences with mixed or ambivalent views toward HTS.
    The objective of this study was to (1) confirm previous research showing positively skewed HTS sentiment on Twitter using a larger dataset by leveraging machine learning techniques and (2) systematically identify individuals who exhibit mixed opinions about HTS via the Twitter platform and therefore represent key audiences for intervention.
    We prospectively collected tweets related to HTS from January to June 2016. We double-coded a subset of approximately 5000 randomly sampled tweets for sentiment toward HTS and used these data to train a machine learning classifier to assess the remaining approximately 556,000 HTS-related Twitter posts. Natural language processing software was used to extract linguistic features (ie, language-based covariates). The data were processed by machine learning tools and algorithms using R. Finally, we used the results to identify individuals who, because they had consistently posted both positive and negative content, might be ambivalent toward HTS and represent an ideal audience for intervention.
    There were 561,960 HTS-related tweets: 373,911 were classified as positive and 183,139 were classified as negative. A set of 12,861 users met a priori criteria indicating that they posted both positive and negative tweets about HTS.
    Sentiment analysis can allow researchers to identify audience segments on social media that demonstrate ambiguity toward key public health issues, such as HTS, and therefore represent ideal populations for intervention. Using large social media datasets can help public health officials to preemptively identify specific audience segments that would be most receptive to targeted campaigns.
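    A minimal sketch of the final audience-identification step: once each tweet has a predicted sentiment, flag users who posted both positive and negative HTS content. The classified tweets are invented, and the study's a priori criteria (e.g., posting-volume thresholds) are not reproduced.

```python
import pandas as pd

# Toy table of classifier output: one row per tweet.
classified = pd.DataFrame({
    "user": ["a", "a", "b", "b", "c"],
    "sentiment": ["positive", "negative", "positive", "positive", "negative"],
})

# Collect the set of sentiments each user has posted, then keep users with both.
per_user = classified.groupby("user")["sentiment"].agg(set)
ambivalent = per_user[per_user.apply(lambda s: {"positive", "negative"} <= s)].index
print(list(ambivalent))  # -> ['a']
```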
  • Article type: Journal Article
    BACKGROUND: Social media have been increasingly adopted by health agencies to disseminate information, interact with the public, and understand public opinion. Among them, the Centers for Disease Control and Prevention (CDC) is one of the first US government health agencies to adopt social media during health emergencies and crises. It was active on Twitter during the 2016 Zika epidemic, which caused 5168 domestic noncongenital cases in the United States.
    OBJECTIVE: The aim of this study was to quantify the temporal variabilities in CDC's tweeting activities throughout the Zika epidemic, public engagement (defined as retweeting and replying), and Zika case counts. It then compares the patterns of these 3 datasets to identify possible discrepancies among domestic Zika case counts, CDC's response on Twitter, and public engagement in this topic.
    METHODS: All of the CDC-initiated tweets published in 2016 with corresponding retweets and replies were collected from 67 CDC-associated Twitter accounts. Both univariate and multivariate time series analyses were performed in each quarter of 2016 for domestic Zika case counts, CDC tweeting activities, and public engagement in the CDC-initiated tweets.
    RESULTS: CDC sent out >84.0% (5130/6104) of its Zika tweets in the first quarter of 2016 when Zika case counts were low in the 50 US states and territories (only 560/5168, 10.8% cases and 662/38,885, 1.70% cases, respectively). While Zika case counts increased dramatically in the second and third quarters, CDC efforts on Twitter substantially decreased. The time series of public engagement in the CDC-initiated tweets generally differed among quarters and from that of original CDC tweets based on autoregressive integrated moving average model results. Both original CDC tweets and public engagement had the highest mutual information with Zika case counts in the second quarter. Furthermore, public engagement in the original CDC tweets was substantially correlated with and preceded actual Zika case counts.
    CONCLUSIONS: Considerable discrepancies existed among CDC's original tweets regarding Zika, public engagement in these tweets, and the actual Zika epidemic. The patterns of these discrepancies also varied between different quarters in 2016. CDC was much more active in the early warning of Zika, especially in the first quarter of 2016. Public engagement in CDC's original tweets served as a more prominent predictor of the actual Zika epidemic than the number of CDC's original tweets later in the year.
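    The sketch below fits the kind of univariate time-series model named in the Methods (ARIMA) to a weekly engagement series. The counts are synthetic and the (p, d, q) order is a placeholder, not the order selected in the study.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(2)
weeks = pd.date_range("2016-01-03", periods=52, freq="W")
engagement = pd.Series(rng.poisson(200, size=52).astype(float), index=weeks)

# Placeholder order; in practice the order is chosen by diagnostics or AIC.
model = ARIMA(engagement, order=(1, 1, 1))
fitted = model.fit()
print(fitted.summary().tables[0])

forecast = fitted.forecast(steps=4)   # engagement forecast for the next 4 weeks
print(forecast.round(1))
```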
  • Article type: Journal Article
    BACKGROUND: Some of the temporal variations and clock-like rhythms that govern several different health-related behaviors can be traced in near real-time with the help of search engine data. This is especially useful when studying phenomena where little or no traditional data exist. One specific area where traditional data are incomplete is the study of diurnal mood variations, or daily changes in individuals' overall mood state in relation to depression-like symptoms.
    OBJECTIVE: The objective of this exploratory study was to analyze diurnal variations for interest in depression on the Web to discover hourly patterns of depression interest and help seeking.
    METHODS: Hourly query volume data for 6 depression-related queries in Finland were downloaded from Google Trends in March 2017. A continuous wavelet transform (CWT) was applied to the hourly data to focus on the diurnal variation. Longer term trends and noise were also eliminated from the data to extract the diurnal variation for each query term. An analysis of variance was conducted to determine the statistical differences between the distributions of each hour. Data were also trichotomized and analyzed in 3 time blocks to make comparisons between different time periods during the day.
    RESULTS: Search volumes for all depression-related query terms showed a unimodal regular pattern during the 24 hours of the day. All queries feature clear peaks during the nighttime hours around 11 PM to 4 AM and troughs between 5 AM and 10 PM. In the means of the CWT-reconstructed data, the differences in nighttime and daytime interest are evident, with a difference of 37.3 percentage points (pp) for the term "Depression," 33.5 pp for "Masennustesti," 30.6 pp for "Masennus," 12.8 pp for "Depression test," 12.0 pp for "Masennus testi," and 11.8 pp for "Masennus oireet." The trichotomization showed peaks in the first time block (12:00 AM-7:59 AM) for all 6 terms. The search volumes then decreased significantly during the second time block (8:00 AM-3:59 PM) for the terms "Masennus oireet" (P<.001), "Masennus" (P=.001), "Depression" (P=.005), and "Depression test" (P=.004). Higher search volumes for the terms "Masennus" (P=.14), "Masennustesti" (P=.07), and "Depression test" (P=.10) were present between the second and third time blocks.
    CONCLUSIONS: Help seeking for depression has clear diurnal patterns, with significant rise in depression-related query volumes toward the evening and night. Thus, search engine query data support the notion of the evening-worse pattern in diurnal mood variation. Information on the timely nature of depression-related interest on an hourly level could improve the chances for early intervention, which is beneficial for positive health outcomes.
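    As an illustration of the wavelet step in the Methods, the sketch below applies a continuous wavelet transform (using PyWavelets, not necessarily the authors' implementation) to a synthetic hourly query-volume series and averages power in the roughly 24-hour band; the scale range is chosen only to bracket the diurnal band.

```python
import numpy as np
import pywt

hours = np.arange(24 * 14)                                    # two weeks of hourly data
volume = 50 + 10 * np.sin(2 * np.pi * (hours - 2) / 24)       # synthetic nighttime peak around 2 AM
volume += np.random.default_rng(3).normal(0, 3, size=hours.size)

scales = np.arange(8, 40)                                     # covers periods near 24 hours for 'morl'
coefs, freqs = pywt.cwt(volume, scales, "morl", sampling_period=1.0)

periods = 1.0 / freqs                                         # periods in hours
diurnal_band = (periods > 20) & (periods < 28)
diurnal_power = np.abs(coefs[diurnal_band]).mean(axis=0)      # strength of the ~24 h rhythm over time
print(f"mean diurnal power: {diurnal_power.mean():.2f}")
```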