Cardiomegaly is a radiographic abnormality of significant prognostic importance, and it can be identified on chest X-ray images. Early detection of cardiomegaly reduces the risk of congestive heart failure and systolic dysfunction. Given the shortage of radiologists, there is demand for an artificial intelligence tool for the early detection of cardiomegaly. The cardiomegaly X-ray dataset is extracted from the CheXpert database; in total, 46,195 X-ray records spanning three views (AP, PA, and lateral) are used to train and validate the proposed model. The artificial intelligence app, named CardioXpert, is built on deep neural networks. A transfer learning approach is adopted to improve the prediction metrics, and an optimized training method, adaptive moment estimation (Adam), is used. Three transfer-learning-based deep neural networks, named APNET, PANET, and LateralNET, are constructed, one for each X-ray view. Finally, certainty-based fusion of the three networks is performed to improve prediction accuracy; the fused model is named CardioXpert. As the proposed method is based on the largest cardiomegaly dataset, hold-out validation is performed to verify the prediction accuracy of the proposed model, and the model is validated on an unseen dataset. APNET, PANET, and LateralNET are validated individually, and then the fused network CardioXpert is validated. CardioXpert achieves an accuracy of 93.6%, the highest reported for this dataset to date, along with the highest sensitivity (94.7%) and a precision of 97.7%. These metrics show that the proposed model outperforms state-of-the-art deep transfer learning methods for diagnosing the cardiomegaly thoracic disorder. The proposed deep neural network model is deployed as a web app.
Cardiologists can use this prognostic app to detect cardiomegaly faster and more robustly at an early stage from low-cost chest X-ray images.
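The abstract does not specify the exact certainty-based fusion rule. A minimal sketch, assuming the rule simply trusts whichever view-specific network (APNET, PANET, or LateralNET) reports the highest softmax confidence for a given study:

```python
import numpy as np

def certainty_fusion(probs_ap, probs_pa, probs_lat):
    """Fuse per-view softmax outputs by trusting the most certain network.

    The max-confidence rule used here is an assumed interpretation of the
    paper's "certainty-based fusion"; the actual rule may differ.
    Each input is a 1-D array of class probabilities from one view network.
    """
    probs = np.stack([probs_ap, probs_pa, probs_lat])  # shape (3, n_classes)
    certainties = probs.max(axis=1)                    # peak confidence per view
    best_view = int(certainties.argmax())              # 0=AP, 1=PA, 2=lateral
    return probs[best_view], best_view
```

For example, if the PA network is most confident, its probability vector is returned as the fused prediction.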
The five-year survival rate for pancreatic cancer (PC) is the lowest of any cancer type, and it is the fourth leading cause of cancer-related death, with a growing death rate. The most significant risk factors for cancer invasion are smoking, alcohol use, diabetes, and prior pancreatitis. The proposed method detects PC using image-processing techniques. In this study, CT images are used as input and preprocessed with an adaptive Wiener filter to remove noise. Preprocessing is followed by a region-growing model that segments the noise-free image. The Scale-Invariant Feature Transform (SIFT) is then used to extract the tumor boundaries, and principal component analysis (PCA) enhances the extracted features to improve the classification of pancreatic CT images. A convolutional neural network (CNN) classifier operates on the extracted image features. To categorize an image as pancreatic cancer or non-pancreatic cancer, the test data are compared against the training data. The entire pipeline is implemented in MATLAB, and a recent performance-estimation approach is used, resulting in excellent accuracy.
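The denoising and segmentation steps of the pipeline above can be sketched as follows. This is a Python illustration (the paper uses MATLAB); the Wiener window size and the intensity tolerance for region growing are assumptions, since the abstract does not specify them:

```python
import numpy as np
from collections import deque
from scipy.signal import wiener

def denoise(img):
    """Adaptive Wiener filtering, as in the preprocessing step.
    The 3x3 window is an assumed parameter."""
    return wiener(img, mysize=3)

def region_grow(img, seed, tol=0.1):
    """Flood-fill style region growing from a seed pixel.
    `tol` is a hypothetical intensity tolerance; the paper's growth
    criterion is not stated in the abstract."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    seed_val = img[seed]
    mask[seed] = True
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(img[ny, nx] - seed_val) <= tol):
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask
```

Feature extraction (SIFT), PCA, and the CNN classifier would follow on the segmented region.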
Mango is an important commercial fruit in terms of both market value and production volume, and it is grown in more than ninety countries around the globe. Consequently, the demand for effective grading and sorting has steadily increased. This communication describes a non-invasive mango fruit grading and sorting model that uses a hybrid soft-computing approach: artificial neural networks (ANN), optimized with the Antlion optimizer (ALO), serve as the classification tool. Mango quality is evaluated according to four grading parameters: size (volume and morphology), maturity (ripe/unripe), defect (defective/healthy), and variety (cultivar). A comparison of the proposed grading system with state-of-the-art models is also performed. The system achieved an overall classification rate of 95.8% and outperformed the other models, demonstrating its effectiveness in fruit grading and sorting applications.
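In the ANN+ALO scheme, the metaheuristic searches over the network's weight vector to minimize classification error rather than using gradient descent. A minimal sketch of that objective, with an illustrative one-hidden-layer architecture (the paper's actual layer sizes and features are not given in the abstract):

```python
import numpy as np

def unpack(params, n_in, n_hid):
    """Split a flat parameter vector into ANN weights (sizes illustrative)."""
    i = 0
    w1 = params[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = params[i:i + n_hid]; i += n_hid
    w2 = params[i:i + n_hid]; i += n_hid
    b2 = params[i]
    return w1, b1, w2, b2

def predict(params, X, n_in, n_hid):
    """Forward pass: tanh hidden layer, sigmoid output grade score."""
    w1, b1, w2, b2 = unpack(params, n_in, n_hid)
    h = np.tanh(X @ w1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))

def fitness(params, X, y, n_in, n_hid):
    """Misclassification rate: the objective a metaheuristic such as the
    Antlion optimizer would minimize over candidate weight vectors."""
    return np.mean((predict(params, X, n_in, n_hid) > 0.5) != y)
```

ALO itself maintains a population of such parameter vectors and updates them via random walks around elite "antlions"; any population-based optimizer could be dropped in against this fitness function.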
Medical image fusion plays a significant role in medical diagnosis applications. Although conventional approaches produce moderate visual quality, there is still scope to improve the performance parameters and reduce computational complexity. This article therefore implements a hybrid fusion method based on a novel combination of joint slope analysis (JSA), probabilistic parametric steered image filtration (PPSIF), and a deep learning convolutional neural network (DLCNN)-based SR Fusion Net. JSA decomposes the images to estimate edge-based slopes and develops edge-preserving approximate layers from the multi-modal medical images. PPSIF then generates the feature fusion with base-layer-based weight maps, and the SR Fusion Net generates spatial and texture feature-based weight maps. Finally, an optimal fusion rule is applied to the detail layers generated from the base and approximate layers, producing the fused outcome. The proposed method can fuse various image-modality combinations, such as MRI-CT, MRI-PET, and MRI-SPECT, using two different architectures. Simulation results show that the proposed method achieves better subjective and objective performance than state-of-the-art approaches.
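The core idea of weight-map-driven fusion can be sketched as below. Here a simple gradient-energy map stands in for the paper's learned (SR Fusion Net) and filtered (PPSIF) weight maps, which are far more elaborate; the sketch only illustrates how per-pixel weights combine two modalities:

```python
import numpy as np

def gradient_weight_map(img, eps=1e-6):
    """Edge-energy weight map: a simple stand-in for the paper's PPSIF and
    SR Fusion Net weight maps. `eps` avoids division by zero in flat regions."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy) + eps

def weighted_fusion(img_a, img_b):
    """Per-pixel convex combination of two co-registered modality images,
    weighted by local edge energy."""
    wa = gradient_weight_map(img_a)
    wb = gradient_weight_map(img_b)
    return (wa * img_a + wb * img_b) / (wa + wb)
```

Because the weights are positive and normalized per pixel, each fused pixel stays between the corresponding pixel values of the two source images, preserving intensity range.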