Computer neural networks

  • Article type: Journal Article
    BACKGROUND: There are abundant studies on COVID-19 but few on its impact on hepatitis E. We aimed to assess the effect of the COVID-19 countermeasures on the pattern of hepatitis E incidence and explore the application of time series models in analyzing this pattern.
    METHODS: Our pivotal idea was to fit a pre-COVID-19 model with data from before the COVID-19 outbreak and use the deviation between forecast values and actual values to reflect the effect of COVID-19 countermeasures. We analyzed the pattern of hepatitis E incidence in China from 2013 to 2018. We evaluated the fitting and forecasting capability of 3 methods before the COVID-19 outbreak. Furthermore, we employed these methods to construct pre-COVID-19 incidence models and compare post-COVID-19 forecasts with reality.
    RESULTS: Before the COVID-19 outbreak, the Chinese hepatitis E incidence pattern was overall stationary and seasonal, with a peak in March, a trough in October, and higher levels in winter and spring than in summer and autumn each year. Nevertheless, post-COVID-19 forecasts from the pre-COVID-19 models differed markedly from reality in some periods but were congruous in others.
    CONCLUSIONS: Since the COVID-19 pandemic, the Chinese hepatitis E incidence pattern has altered substantially, and the incidence has greatly decreased. The effect of the COVID-19 countermeasures on the pattern of hepatitis E incidence was temporary. The incidence of hepatitis E was anticipated to gradually revert to its pre-COVID-19 pattern.
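    The study's deviation-based design can be sketched in a few lines. The seasonal-naive forecaster below (repeat last year's monthly values) is an assumed stand-in, since the abstract does not name the three time-series methods, and all incidence numbers are invented for illustration:

```python
# Sketch of the deviation idea: learn a pre-outbreak seasonal pattern,
# forecast forward, and read the gap between forecast and reality as the
# countermeasures' effect. A seasonal-naive "model" stands in for the
# paper's actual time-series methods; all numbers are illustrative.
monthly_incidence = [0.30, 0.28, 0.35, 0.27, 0.25, 0.20,
                     0.18, 0.17, 0.19, 0.15, 0.22, 0.26]  # last pre-outbreak year

def seasonal_naive_forecast(last_year, horizon):
    """Forecast `horizon` months ahead by repeating the last observed year."""
    return [last_year[i % 12] for i in range(horizon)]

def deviation(forecast, actual):
    """Forecast minus actual; large positive gaps suggest suppressed incidence."""
    return [f - a for f, a in zip(forecast, actual)]

forecast = seasonal_naive_forecast(monthly_incidence, 12)
actual_post_covid = [0.12, 0.10, 0.15, 0.11, 0.10, 0.09,
                     0.08, 0.08, 0.09, 0.08, 0.10, 0.12]  # hypothetical
gaps = deviation(forecast, actual_post_covid)
print(max(gaps))
```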

  • Article type: Journal Article
    To assess the performance, time-efficiency, and consistency of a convolutional neural network (CNN)-based automated approach for integrated segmentation of craniomaxillofacial structures, compared with a semi-automated method, for creating a virtual patient using cone beam computed tomography (CBCT) scans.
    Thirty CBCT scans were selected. Six craniomaxillofacial structures, encompassing the maxillofacial complex bones, maxillary sinus, dentition, mandible, mandibular canal, and pharyngeal airway space, were segmented on these scans using a semi-automated method and a composite of previously validated CNN-based automated segmentation techniques for individual structures. A qualitative assessment of the automated segmentation revealed the need for minor refinements, which were manually corrected. These refined segmentations served as a reference for comparing semi-automated and automated integrated segmentations.
    The majority of minor adjustments with the automated approach involved under-segmentation of sinus mucosal thickening and regions with reduced bone thickness within the maxillofacial complex. The automated and the semi-automated approaches required an average time of 1.1 min and 48.4 min, respectively. The automated method demonstrated a greater degree of similarity (99.6 %) to the reference than the semi-automated approach (88.3 %). The standard deviation values for all metrics with the automated approach were low, indicating a high consistency.
    The CNN-driven integrated segmentation approach proved to be accurate, time-efficient, and consistent for creating a CBCT-derived virtual patient through simultaneous segmentation of craniomaxillofacial structures.
    The creation of a virtual orofacial patient using an automated approach could potentially transform personalized digital workflows. This advancement could be particularly beneficial for treatment planning in a variety of dental and maxillofacial specialties.
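    The "degree of similarity" to the reference reported above could be scored voxel-wise with an overlap metric such as the Dice similarity coefficient; the abstract does not specify the metric, so the sketch below is an assumption, and the toy 1-D masks merely stand in for 3-D CBCT label volumes:

```python
# Minimal sketch of scoring an automated segmentation against a refined
# reference with the Dice similarity coefficient. Masks are toy binary
# lists; nothing here comes from the paper's code.
def dice(mask_a, mask_b):
    """Dice coefficient of two binary masks: 2|A∩B| / (|A| + |B|)."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2 * inter / total if total else 1.0

reference = [1, 1, 1, 0, 0, 1, 0, 1]
automated = [1, 1, 0, 0, 0, 1, 0, 1]
print(round(dice(automated, reference), 3))
```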

  • Article type: Journal Article
    BACKGROUND: The diagnosis of Parkinson's disease (PD) and evaluation of its symptoms require in-person clinical examination. Remote evaluation of PD symptoms is desirable, especially during a pandemic such as the coronavirus disease 2019 pandemic. One potential method to remotely evaluate PD motor impairments is video-based analysis. In this study, we aimed to assess the feasibility of predicting the Unified Parkinson's Disease Rating Scale (UPDRS) score from gait videos using a convolutional neural network (CNN) model.
    METHODS: We retrospectively obtained 737 consecutive gait videos of 74 patients with PD and their corresponding neurologist-rated UPDRS scores. We utilized a CNN model for predicting the total UPDRS part III score and four subscores of axial symptoms (items 27, 28, 29, and 30), bradykinesia (items 23, 24, 25, 26, and 31), rigidity (item 22) and tremor (items 20 and 21). We trained the model on 80% of the gait videos and used 10% of the videos as a validation dataset. We evaluated the predictive performance of the trained model by comparing the model-predicted score with the neurologist-rated score for the remaining 10% of videos (test dataset). We calculated the coefficient of determination (R2) between those scores to evaluate the model's goodness of fit.
    RESULTS: In the test dataset, the R2 values between the model-predicted and neurologist-rated values for the total UPDRS part III score and subscores of axial symptoms, bradykinesia, rigidity, and tremor were 0.59, 0.77, 0.56, 0.46, and 0.0, respectively. The performance was relatively low for videos from patients with severe symptoms.
    CONCLUSIONS: Despite the low predictive performance of the model for the total UPDRS part III score, it demonstrated relatively high performance in predicting subscores of axial symptoms. The model approximately predicted the total UPDRS part III scores of patients with moderate symptoms, but the performance was low for patients with severe symptoms owing to limited data. A larger dataset is needed to improve the model\'s performance in clinical settings.
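    The goodness-of-fit check described in the methods reduces to computing R² between predicted and neurologist-rated scores. A minimal sketch, with made-up scores in place of the study's data:

```python
# Coefficient of determination (R²) between rated and predicted UPDRS
# scores: 1 - SS_res / SS_tot. Scores below are invented for illustration.
def r_squared(actual, predicted):
    mean = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1 - ss_res / ss_tot

rated     = [10, 22, 35, 18, 40, 27]
predicted = [12, 20, 30, 21, 36, 25]
print(round(r_squared(rated, predicted), 2))
```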

  • Article type: Observational Study
    BACKGROUND: Improving the accuracy of estimated fetal weight (EFW) calculation can contribute to decision-making for obstetricians and decrease perinatal complications. This study aimed to develop a deep neural network (DNN) model for EFW based on obstetric electronic health records.
    METHODS: This study retrospectively analyzed the electronic health records of pregnant women with live births delivery at the obstetrics department of International Peace Maternity & Child Health Hospital between January 2016 and December 2018. The DNN model was evaluated using Hadlock's formula and multiple linear regression.
    RESULTS: A total of 34824 live births (23922 primiparas) from 49896 pregnant women were analyzed. The root-mean-square error of the DNN model was 189.64 g (95% CI: 187.95 g-191.16 g), and the mean absolute percentage error was 5.79% (95% CI: 5.70%-5.81%), significantly lower than those of Hadlock's formula (240.36 g and 6.46%, respectively). By incorporating previously unreported factors, such as the birth weight of prior pregnancies, a concise and effective DNN model was built based on only 10 parameters. The accuracy rate of the new model increased from 76.08% to 83.87%, with a root-mean-square error of only 243.80 g.
    CONCLUSIONS: The proposed DNN model for EFW calculation is more accurate than previous approaches in this area and can be adopted for better decision-making related to fetal monitoring.
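    The two error metrics used to compare models above are standard and easy to reproduce; a minimal sketch, with invented weights in place of the study's records:

```python
# Root-mean-square error (grams) and mean absolute percentage error,
# the metrics used to compare the DNN against Hadlock's formula.
# Weights below are illustrative, not study data.
import math

def rmse(actual, predicted):
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mape(actual, predicted):
    return 100 * sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

actual_weights = [3200, 2850, 3500, 3050]  # grams at birth
estimated      = [3350, 2700, 3450, 3200]  # hypothetical EFW outputs
print(round(rmse(actual_weights, estimated), 1), round(mape(actual_weights, estimated), 2))
```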

  • Article type: Journal Article
    To train and validate a cloud-based convolutional neural network (CNN) model for automated segmentation (AS) of dental implant and attached prosthetic crown on cone-beam computed tomography (CBCT) images.
    A total dataset of 280 maxillomandibular jawbone CBCT scans was acquired from patients who underwent implant placement with or without coronal restoration. The dataset was randomly divided into three subsets: training set (n = 225), validation set (n = 25) and testing set (n = 30). A CNN model was developed and trained using expert-based semi-automated segmentation (SS) of the implant and attached prosthetic crown as the ground truth. The performance of AS was assessed by comparing it with SS and with manually corrected automated segmentation, referred to as refined-automated segmentation (R-AS). Evaluation metrics included timing, voxel-wise comparison based on confusion matrix and 3D surface differences.
    The average time required for AS was 60 times faster (<30 s) than the SS approach. The CNN model was highly effective in segmenting dental implants both with and without coronal restoration, achieving a high dice similarity coefficient score of 0.92±0.02 and 0.91±0.03, respectively. Moreover, the root mean square deviation values were also found to be low (implant only: 0.08±0.09 mm, implant+restoration: 0.11±0.07 mm) when compared with R-AS, implying high AI segmentation accuracy.
    The proposed cloud-based deep learning tool demonstrated high performance and time-efficient segmentation of implants on CBCT images.
    AI-based segmentation of implants and prosthetic crowns can minimize the negative impact of artifacts and enhance the generalizability of creating dental virtual models. Furthermore, incorporating the suggested tool into existing CNN models specialized for segmenting anatomical structures can improve pre-surgical planning for implants and post-operative assessment of peri-implant bone levels.
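    The random 225/25/30 split described in the methods can be sketched over a list of scan identifiers; the identifiers and the seeded shuffle below are invented placeholders, not the study's procedure:

```python
# Sketch of randomly dividing 280 scans into training/validation/testing
# subsets of 225, 25, and 30, as in the methods. Identifiers are invented.
import random

def split_dataset(items, n_train, n_val, n_test, seed=0):
    assert n_train + n_val + n_test == len(items)
    shuffled = items[:]
    random.Random(seed).shuffle(shuffled)  # deterministic, reproducible split
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

scans = [f"cbct_{i:03d}" for i in range(280)]
train, val, test = split_dataset(scans, 225, 25, 30)
print(len(train), len(val), len(test))
```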

  • Article type: Journal Article
    BACKGROUND: This study aimed to evaluate the accuracy and clinical usability of implant system classification using automated machine learning on a Google Cloud platform.
    METHODS: Four dental implant systems were selected: Osstem TSIII, Osstem USII, Biomet 3i Osseotite External, and Dentsply Sirona Xive. A total of 4,800 periapical radiographs (1,200 for each implant system) were collected and labeled based on electronic medical records. Regions of interest were manually cropped to 400×800 pixels, and all images were uploaded to Google Cloud storage. Approximately 80% of the images were used for training, 10% for validation, and 10% for testing. Google automated machine learning (AutoML) Vision automatically executed a neural architecture search technology to apply an appropriate algorithm to the uploaded data. A single-label image classification model was trained using AutoML. The performance of the model was evaluated in terms of accuracy, precision, recall, specificity, and F1 score.
    RESULTS: The accuracy, precision, recall, specificity, and F1 score of the AutoML Vision model were 0.981, 0.963, 0.961, 0.985, and 0.962, respectively. Osstem TSIII had an accuracy of 100%. Osstem USII and 3i Osseotite External were most often confused in the confusion matrix.
    CONCLUSIONS: Deep learning-based AutoML on a cloud platform showed high accuracy in the classification of dental implant systems as a fine-tuned convolutional neural network. Higher-quality images from various implant systems will be required to improve the performance and clinical usability of the model.
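    All four reported metrics follow directly from per-class (one-vs-rest) confusion-matrix counts. A minimal sketch, with invented counts rather than the study's matrix:

```python
# Precision, recall, specificity, and F1 from one-vs-rest confusion-matrix
# counts for a single class. Counts below are hypothetical.
def class_metrics(tp, fp, fn, tn):
    precision   = tp / (tp + fp)
    recall      = tp / (tp + fn)
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, specificity, f1

# Hypothetical counts for one implant system in a 120-image test set
p, r, s, f1 = class_metrics(tp=28, fp=1, fn=2, tn=89)
print(round(p, 3), round(r, 3), round(s, 3), round(f1, 3))
```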

  • Article type: Systematic Review
    One potential application of neural networks (NNs) is the early-stage detection of oral cancer. This systematic review aimed to determine the level of evidence on the sensitivity and specificity of NNs for the detection of oral cancer, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) and Cochrane guidelines. Literature sources included PubMed, ClinicalTrials, Scopus, Google Scholar, and Web of Science. In addition, the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) tool was used to assess the risk of bias and the quality of the studies. Only 9 studies fully met the eligibility criteria. In most studies, NNs showed accuracy greater than 85%, though 100% of the studies presented a high risk of bias, and 33% showed high applicability concerns. Nonetheless, the included studies demonstrated that NNs were useful in the detection of oral cancer. However, studies of higher quality, with adequate methodology, a low risk of bias, and no applicability concerns are required so that more robust conclusions can be reached.

  • Article type: Journal Article
    OBJECTIVE: Surgical resection with complete tumor excision (R0) provides the best chance of long-term survival for patients with intrahepatic cholangiocarcinoma (iCCA). A non-invasive imaging technology, which could provide quick intraoperative assessment of resection margins, as an adjunct to histological examination, is optical coherence tomography (OCT). In this study, we investigated the ability of OCT combined with convolutional neural networks (CNN), to differentiate iCCA from normal liver parenchyma ex vivo.
    METHODS: Consecutive adult patients undergoing elective liver resections for iCCA between June 2020 and April 2021 (n = 11) were included in this study. Areas of interest from resection specimens were scanned ex vivo, before formalin fixation, using a table-top OCT device at 1310 nm wavelength. Scanned areas were marked and histologically examined, providing a diagnosis for each scan. An Xception CNN was trained, validated, and tested in matching OCT scans to their corresponding histological diagnoses, through a 5 × 5 stratified cross-validation process.
    RESULTS: Twenty-four three-dimensional scans (corresponding to approx. 85,603 individual) from ten patients were included in the analysis. In 5 × 5 cross-validation, the model achieved a mean F1-score, sensitivity, and specificity of 0.94, 0.94, and 0.93, respectively.
    CONCLUSIONS: Optical coherence tomography combined with CNN can differentiate iCCA from liver parenchyma ex vivo. Further studies are necessary to expand on these results and lead to innovative in vivo OCT applications, such as intraoperative or endoscopic scanning.
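    The 5 × 5 stratified cross-validation in the methods is five repeats of a five-fold split that preserves class proportions. A minimal sketch of the stratified-fold building block, with toy labels (tumour = 1, parenchyma = 0) rather than the study's scans:

```python
# Stratified k-fold assignment: distribute the indices of each class
# round-robin across k folds so every fold keeps the label proportions.
# Labels are toy data, not the study's.
from collections import defaultdict

def stratified_folds(labels, k=5):
    by_label = defaultdict(list)
    for idx, lab in enumerate(labels):
        by_label[lab].append(idx)
    folds = [[] for _ in range(k)]
    for indices in by_label.values():
        for pos, idx in enumerate(indices):
            folds[pos % k].append(idx)
    return folds

labels = [1, 0] * 10  # 20 scans, balanced classes
folds = stratified_folds(labels, k=5)
print([len(f) for f in folds])
```

Repeating the fold assignment five times with reshuffled indices would give the full 5 × 5 scheme.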

  • Article type: Journal Article
    BACKGROUND: There is a lack of tools for identifying the site of origin in mucinous cancer. This study aimed to evaluate the performance of a transcriptome-based classifier for identifying the site of origin in mucinous cancer.
    METHODS: Transcriptomic data of 1878 non-mucinous and 82 mucinous cancer specimens, with 7 sites of origin, namely, the uterine cervix (CESC), colon (COAD), pancreas (PAAD), stomach (STAD), uterine endometrium (UCEC), uterine carcinosarcoma (UCS), and ovary (OV), obtained from The Cancer Genome Atlas, were used as the training and validation sets, respectively. Transcriptomic data of 14 mucinous cancer specimens from a tissue archive were used as the test set. For identifying the site of origin, a set of 100 differentially expressed genes for each site of origin was selected. After removing multiple iterations of the same gene, 427 genes were chosen, and their RNA expression profiles, at each site of origin, were used to train the deep neural network classifier. The performance of the classifier was estimated using the training, validation, and test sets.
    RESULTS: The accuracy of the model in the training set was 0.998, while that in the validation set was 0.939 (77/82). In the test set, which was newly sequenced from a tissue archive, the model showed an accuracy of 0.857 (12/14). t-SNE analysis revealed that samples in the test set were part of the clusters obtained for the training set.
    CONCLUSIONS: Although limited by the small sample size, we showed that a transcriptome-based classifier could correctly identify the site of origin of mucinous cancer.
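    The gene-selection step (top 100 differentially expressed genes per site, deduplicated across the 7 sites to 427 unique genes) can be sketched as a top-k union; the gene names and scores below are invented, and k is shrunk for illustration:

```python
# Sketch of selecting the top-k differentially expressed genes per site
# of origin and deduplicating across sites. Scores and names are invented.
def select_signature(deg_scores_per_site, k=2):
    """deg_scores_per_site: {site: {gene: score}}; returns the sorted
    unique union of each site's top-k genes by score."""
    selected = set()
    for scores in deg_scores_per_site.values():
        top = sorted(scores, key=scores.get, reverse=True)[:k]
        selected.update(top)
    return sorted(selected)

deg = {
    "COAD": {"MUC2": 5.1, "CDX2": 4.8, "TP53": 1.0},
    "PAAD": {"KRT19": 4.2, "MUC2": 3.9, "TP53": 0.8},
}
print(select_signature(deg, k=2))
```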

  • Article type: Journal Article
    Artificial intelligence (AI), which includes machine learning and deep learning has been introduced to nursing care in recent years. The present study reviews the following topics: the concepts of AI, machine learning, and deep learning; examples of AI-based nursing research; the necessity of education on AI in nursing schools; and the areas of nursing care where AI is useful. AI refers to an intelligent system consisting not of a human, but a machine. Machine learning refers to computers' ability to learn without being explicitly programmed. Deep learning is a subset of machine learning that uses artificial neural networks consisting of multiple hidden layers. It is suggested that the educational curriculum should include big data, the concept of AI, algorithms and models of machine learning, the model of deep learning, and coding practice. The standard curriculum should be organized by the nursing society. An example of an area of nursing care where AI is useful is prenatal nursing interventions based on pregnant women's nursing records and AI-based prediction of the risk of delivery according to pregnant women's age. Nurses should be able to cope with the rapidly developing environment of nursing care influenced by AI and should understand how to apply AI in their field. It is time for Korean nurses to take steps to become familiar with AI in their research, education, and practice.