Abstract
Pancreatic cancer is one of the deadliest diseases, and it is very difficult to treat because the cancer cells formed in the pancreas intertwine themselves with nearby blood vessels and connective tissue. Hence, the surgical procedure of treatment becomes complicated, and it does not always lead to a cure. Histopathological diagnosis is the usual approach for cancer diagnosis. However, the pancreas lies so deep inside the body that experts sometimes struggle to detect cancer in it. Computer-aided diagnosis can come to the aid of pathologists in this scenario, assisting experts by supporting their diagnostic decisions. In this research, we carried out a deep learning-based approach to analyze histopathology images. We collected whole-slide images of KPC mice to implement this work. The pancreatic abnormalities observed in KPC mice develop histological features similar to those of humans. We created random patches from the whole-slide images. Then, a convolutional autoencoder framework was used to embed these patches into an integrated latent space. We applied ‘information maximization’, a deep learning clustering technique, to group similar patches in an unsupervised manner since our dataset does not have annotations. Moreover, Uniform Manifold Approximation and Projection, a nonlinear dimension reduction technique, was utilized to visualize the embedded patches in a 2-dimensional space. Finally, we calculated a few internal cluster validation metrics to determine the optimal cluster set. Our work concentrated on patch-based anomaly detection in the whole-slide histopathology images of KPC mice.
Author summary
Interdisciplinary research is quite advantageous in addressing real-world problems as it brings multiple disciplines into action. A multidisciplinary approach allows researchers from different backgrounds to work together, which helps increase productivity and understand new ideas. The last decade has witnessed rapid development in digital pathology and opened the path for dealing with pathology differently. As a result, pathological diagnosis is now being explored through the ever-expanding field of artificial intelligence. Deep learning, a subset of artificial intelligence, is extremely effective in performing a pixel-based distinction among histopathological structures and thereby assisting pathologists in their daily practice. In this interdisciplinary research, we analyzed medical data utilizing deep learning algorithms as well as some statistical tools. Statistics helped us interpret the experimental results. This type of interdisciplinary research will pave the way for better diagnosis of diseases and, in turn, reduce the time and cost of treatment. We verified our experimental results with the help of a pathologist, who confirmed that our work has significance.
Citation: Rumman MI, Ono N, Ohuchida K, Altaf-Ul-Amin M, Huang M, Kanaya S (2023) Information maximization-based clustering of histopathology images using deep learning. PLOS Digit Health 2(12): e0000391. https://doi.org/10.1371/journal.pdig.0000391
Editor: Yuan Lai, Tsinghua University, CHINA
Received: April 18, 2023; Accepted: October 16, 2023; Published: December 8, 2023
Copyright: © 2023 Rumman et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: The code for this work can be found at this address: https://github.com/randomaccess2023/KPC_IM_128_64. The datasets (128×128 and 64×64 pixels) can be located here: https://doi.org/10.6084/m9.figshare.24129360. The dataset obtained from sequential patch creation can be accessed from this link: https://doi.org/10.6084/m9.figshare.24137553.
Funding: This work was supported by the Japan Society for the Promotion of Science (JSPS) KAKENHI Grant-in-Aid for Scientific Research (C): Grant Number 21K12111. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Introduction
Research background
Histopathology deals with the examination of microscopic slides incorporating tissues, cells, etc., for the diagnosis of various forms of abnormalities. Histopathologists are expert medical practitioners who inspect cells or tissues under the microscope and make a diagnosis. This is usually the benchmark procedure of histopathological diagnosis. However, it is also a very laborious and tiresome task since these specialists carry out the diagnosis of a huge number of samples every day. The complexity involved in histopathological tissue diagnosis often increases the workload of pathology specialists, and it can lead to a reduction in both accuracy and efficiency [1].
Pancreatic cancer (PC) is a deadly disease and is categorized as one of the most invasive malignancies [2]. According to the data provided by GLOBOCAN 2018, 458,918 new cases of PC were registered worldwide, which alone contributed 2.5% of all new cancer diagnoses in that year [3]. PC is a disease in which cancer cells are found in the tissues of the pancreas. The majority of pancreatic cancers are of the exocrine type, beginning in the cells that line the ducts. Pancreatic neuroendocrine tumors are less common than exocrine pancreatic cancers; however, once they occur, they hinder the smooth function of the healthy cells of the pancreas and can further result in unimpeded cell division [4]. Pancreatic ductal adenocarcinoma (PDAC) is the 7th leading cause of cancer-related death globally and is responsible for over 90% of pancreatic malignancies [5].
Despite being one of the least common types of cancer, the survival outcomes of PDAC are extremely poor. The reason is that people affected with PC show almost no or very few symptoms during the early stages. PDAC has a very dismal prognosis: the survival rate is only 24% one year after diagnosis, and only 9% of patients live for 5 years [6]. Mortality from pancreatic cancer increases with age, and the disease is marginally more common in men than women [7]. It is also estimated that 355,317 new cases will occur worldwide by 2040 [7]. It is further approximated that the incidence of PC will rise to 18.6 per 100,000 people by 2050 around the world [8]. Inter- and intra-tumor heterogeneity among cancer cells underscores the need for satisfactory tools to identify pathological tissue structures as required [9].
Computer-aided diagnosis
Computer-aided diagnosis (CAD) is a system that assists doctors in the interpretation of medical images. Medical professionals are responsible for analyzing a significant amount of information retrieved from various diagnostic technologies (e.g., X-ray, MRI, CT). CAD systems process digital images for distinctive appearances and focus on prominent features (such as the location of disease) to support the decisions taken by expert pathologists. CAD systems help pathologists enhance their diagnostic interpretation by reducing inter-pathologist variation during the diagnostic process [10]. The main goals of CAD include lessening misdiagnosis rates, saving time and cost concerning medical examinations, and reducing the rate of biopsies [11].
In the early days of modern computers, researchers in various fields explored the possibility of the first CAD systems, which utilized flow charts, statistical pattern matching, probability theory, or knowledge-based decision-making processes [12]. However, attention later shifted to data mining approaches due to various algorithmic restrictions. In recent times, CAD based on deep learning has shown promise in classifying histological structures with high accuracy, as artificial intelligence (AI) algorithms can predict malignant growths and thereby inform the therapy process [13]. Medical imaging for diagnosis involving AI is a rapidly growing area of research because of the multidisciplinary nature of machine learning algorithms. CAD is a technology that includes multiple elements such as AI, computer vision, and medical image processing [14].
Digital medical imaging modalities such as computed tomography (CT), magnetic resonance imaging (MRI), and endoscopic ultrasound can play a vital role in cancer diagnosis and treatment planning [15]. The development and implementation of CAD entail the application of computer technology in medical image interpretation [16]. CAD analysis provides a helping hand to radiologists in characterizing lesions [16]. Computer technology can discover abnormalities in medical images of various modalities, which can facilitate early diagnosis to a greater extent [17]. With CAD, radiologists use the computer output as a ‘second opinion’, and the concept is established on taking the contributions of physicians and computers equally into account [17].
Deep learning in medical imaging
Deep learning is a subset of machine learning (which is a subset of AI) that uses vast volumes of data and complex algorithms to train a model. Deep learning utilizes neural networks for analyzing data and predicting outcomes. It has found its application in almost every sector such as healthcare, recommendation systems, news customization, image manipulation, music generation, robotics, natural language processing (NLP), object detection, self-driving cars, and so on.
Deep learning has enormous applications in the healthcare sector. It is widely used for drug discovery, cancer detection, and medical research. Deep learning may be used to automate various time-consuming tasks performed by radiologists, such as lesion detection, segmentation, and classification. Appropriate imaging-based classification of numerous diseases is one of the key tasks for radiologists. Deep neural networks (DNNs), especially convolutional neural networks (CNNs), have shown incredible performance in such diagnostic tasks in recent years [18]. A CNN is a deep learning algorithm that can process digital images in the form of grid patterns. In [19], a CNN has been successfully used for detecting bacterial and viral pneumonia from X-ray images. Another important exercise in medical imaging is biomedical image segmentation. U-Net is a structure that can perform image segmentation [20]. It is one type of convolutional neural network, as its architecture can be broadly thought of as an encoder network followed by a decoder network. Mask R-CNN is another state-of-the-art neural network framework. This variant of DNN can detect objects in a brain MRI image and generate a high-quality segmentation mask for each instance. Mask R-CNN extends Faster R-CNN by adding a branch for predicting segmentation masks on each ROI (region of interest) [21]. Many problems in computer vision and image processing involve “translating” an input image into a corresponding output image [22]. This task is known as image-to-image translation, where the goal is to learn the mapping between an input image and an output image. Image modality translation in magnetic resonance images has been performed by leveraging a deep learning-based conditional generative adversarial network (cGAN) [23].
Finally, we want to explain the significance of deep learning in the study of histopathology images. With the advent of advanced equipment such as specialized scanning machines, it has become quite easy to store histopathology microscopic glass slides as digital slides on a computer for processing. Deep learning has become the methodology of choice when it comes to digital histopathology. Deep learning-based algorithms, including CNNs, are distinct from traditional machine learning procedures because of their ability to learn complex representations and perform pattern recognition from raw data without any intervention from human beings [24]. Due to this ability to learn complex representations, the examination of different specimens can be performed accurately by leveraging CNN models.
Morphological variation in histopathology images not only refers to the microscopic appearance of the cancer cell population but also pertains to the growth pattern and stroma of the tumor [25]. For this reason, CNNs are considered superior to traditional multilayer perceptrons because of their translational equivariance and translational invariance properties, which can handle the inter-class consistency and intra-class variability of complex structured pathological images [26]. The deep learning methods proposed so far in the literature are traditionally based on CNNs, recurrent neural networks (RNNs), generative adversarial networks (GANs), and autoencoders (AEs) [27]. These models refer to different learning strategies such as supervised, weakly supervised, unsupervised, transfer learning, and so on. These types of learning strategies are important in the healthcare sector for anomaly detection, disease identification, or tumor segmentation. In our work, we focused on “unsupervised learning”. Unsupervised learning aims at identifying patterns without mapping an input into a predefined output (i.e., label) [27]. Two very popular and heavily used unsupervised learning frameworks are the AE and the GAN, along with their multiple variants. In [28], a unified GAN architecture has been implemented that carries out cell-level nuclei segmentation in an unsupervised manner utilizing haematoxylin and eosin-stained bone marrow histopathology images. In [29], the authors leveraged a pre-trained DeepLab V3+ architecture for extracting feature vectors and later applied an unsupervised clustering algorithm to those feature vectors to determine clinically relevant known and unknown features of the kidney at the patient level. Self-supervised learning (SSL) has been used in [30] to perform histopathological image analysis. SSL makes use of the structure within data to auto-generate the labels, and it is different from unsupervised learning.
Objectives of this research
In this research, we leveraged deep learning methodology to distinguish different tissue features in the whole-slide images (WSIs) of KPC mice. A genetically engineered KPC mouse model develops disease progression that corresponds to that of human beings [31]. A ZEISS Axio Scan.Z1 digital slide scanner (manufactured by Carl Zeiss) was used to digitize the microscopic glass slides. A 20× objective lens magnification was applied while converting glass slides into the CZI image file format. The pixel resolution under 20× magnification was 0.22 μm per pixel. After that, 30% resizing was applied to convert the CZI files into TIFF format, resulting in a reduction of both image size and resolution. Considering this, the actual resolution during patch creation from the TIFF files is 0.31 μm per pixel. At first, we arbitrarily created small patches from the WSIs. We used two different patch sizes: 128×128 and 64×64 pixels. The WSIs were collected from the Department of Surgery and Oncology, Graduate School of Medical Sciences, Kyushu University. We have 191 WSIs in total. Each WSI is stained with 5 distinct staining techniques, namely HE (Haematoxylin & Eosin), MT (Masson’s Trichrome), CD31 (Cluster of Differentiation 31), CK19 (Cytokeratin 19), and Ki67 (Marker of proliferation Ki67), for better visualization of different cell types. The patches have been created in such a way that each of them exhibits the same biological features with different staining. After creating the patches, we fed them to our convolutional autoencoder architecture, which embedded them into an integrated upper latent space. Convolutional autoencoders are unsupervised deep learning models composed of convolutional layers, and they are capable of creating compressed image representations [32]. The compressed representation of the embedded patches is an incorporation of all the staining techniques. Information maximization (IM) was implemented on the lower latent space to perform clustering in the interest of distinguishing different tissue features. IM enlarges the difference between marginal entropy and conditional entropy. It is a soft clustering technique because it assigns a probability of a data point belonging to a particular cluster. In the next section, we will first describe the dataset, followed by the staining techniques, the model architecture, and IM. After that, we will explain the outcome of several internal cluster validation methods. Subsequently, we will describe the clustering results of the optimal cluster set confirmed by internal validation. Finally, Uniform Manifold Approximation and Projection (UMAP) will be used to visualize the high-dimensional latent embeddings in a 2-dimensional representation corresponding to the optimal cluster set.
Significance of histological staining.
Histological staining is a vital step in diagnosing various diseases as it provides contrast in tissue sections, rendering the tissue constituents visible for microscopic analysis by medical experts [33]. It is a usual practice in histopathological diagnosis to use some common stains such as haematoxylin and eosin. Histological staining is used to highlight important features of the tissue as well as to differentiate structural elements of the tissue by their color or staining intensity. For example, HE stains cell nuclei, while MT visualizes connective tissue [34]. In immunohistochemistry, CD31 can be used to demonstrate angiomas and angiosarcomas, which are tumors and cancer cells usually found in the walls of blood vessels or lymphatic vessels. CK19 can detect tumor cells in the lymph nodes, peripheral blood, and bone marrow of breast cancer patients. Ki67 can also be used in immunohistochemistry, and it is strictly associated with cell proliferation.
Importance of unsupervised learning.
Unsupervised learning can identify patterns in data without annotation and group unsorted information by extracting useful features from it. Unsupervised learning often focuses on clustering, the process of grouping objects that are similar to each other and dissimilar to objects in other clusters. In histopathology studies, unsupervised learning can therefore discover previously unknown information about tissue by categorizing different cell types into separate clusters. Besides, the annotation of medical image data is a very difficult and time-consuming task due to the unavailability of resources and the nature of the process. In this scenario, unsupervised learning turns out to be very useful. Moreover, it is convenient for dealing with quantitative assessment and interobserver variability in pathology.
Influence of this research.
This research is noteworthy for several reasons. One of them is the automatic separation of different cellular features in the form of patches created from whole-slide images. Our work can be the basis for developing models to handle annotation-free data. This work will show that good clustering can be achieved by considering multiple stainings together. That being the case, information maximization can successfully distinguish between embedded features of multi-stained images and cluster them well. It will provide the opportunity to carry out patch-based anomaly detection in whole-slide images. As a result, the overall condition of a WSI can be observed very easily (e.g., which regions are affected and which are not).
Materials and method
Dataset
In this work, we used pathological image patches of KPC mice to conduct our experiment. 191 whole-slide images (WSIs) were collected from the Department of Surgery and Oncology, Graduate School of Medical Sciences, Kyushu University. Each WSI was stained with 5 separate staining techniques. The WSIs were collected as (.tif) files. Each WSI has a dimension of 15000×20000 (Height×Width) pixels. We randomly created small patches from these 191 WSIs, generating 15000 samples in total for each of two different patch sizes: 128×128 and 64×64 pixels. Thus, we created two different datasets, each with 15000 patches, one with 128×128 and the other with 64×64 pixels spatial size. Because of this random creation, the patches at the same position in these two datasets are not expected to contain the exact same biological features; they correspond to different positions of the WSI that they belong to.
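The sketch below illustrates one way such random patches could be extracted from the multi-stained TIFF files; the function name, file handling via tifffile, and the idea of reusing the same coordinates across the five stainings of a slide are illustrative assumptions rather than the exact pipeline used in this study.

```python
import numpy as np
import tifffile

rng = np.random.default_rng(seed=0)

def random_multistain_patches(stain_paths, n_patches, patch=128):
    """Crop patches at the same random coordinates from all stainings of one WSI.

    `stain_paths` is a (hypothetical) list of 5 TIFF paths (HE, MT, CD31, CK19, Ki67)
    showing the same tissue section with different stains.
    """
    slides = [tifffile.imread(p) for p in stain_paths]     # each (H, W, 3), e.g. 15000 x 20000
    h, w = slides[0].shape[:2]
    out = []
    for _ in range(n_patches):
        y = rng.integers(0, h - patch + 1)                  # random top-left corner
        x = rng.integers(0, w - patch + 1)
        # one patch per staining, all at the same location
        out.append([s[y:y + patch, x:x + patch] for s in slides])
    return np.asarray(out)                                  # (n_patches, 5, patch, patch, 3)
```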
We can see from Fig 1 that the tissue features in these patches are different due to arbitrary creation from the WSIs.
Fig 1 shows a patch in a particular position from both datasets (128×128 and 64×64 pixels).
Staining techniques
We used 5 different staining techniques for this work: Haematoxylin & Eosin (HE), Masson’s Trichrome (MT), Cluster of Differentiation 31 (CD31), Cytokeratin 19 (CK19), and Marker of proliferation Ki67 (Ki67). HE is used for the demonstration of the nucleus and cytoplasmic inclusions; haematoxylin stains the cell nucleus purple and eosin stains the fibers pink. MT stains muscle and collagen fibers; it highlights the cell nucleus reddish violet as well as collagen fibers blue. CD31 exhibits the presence of endothelial cells in tissue sections. CK19 is a cell marker for cancer stem cells, which are a subpopulation within the tumor. Ki67 is used in oncology to estimate a tumor’s proliferation index.
The biological features in each WSI shown in Fig 2 are the same although the stains appear different from each other. This is true for the patches shown in Fig 1 as well, where each patch represents the same biological features despite looking different under the various stains.
Fig 2 represents a specified WSI with 5 staining techniques. We have also shown the 2 types of representative patches (128×128 and 64×64 pixels) that have been created randomly from these WSIs.
Convolutional autoencoder architecture
There are essentially three sections in our network: an encoder, a classifier, and a decoder. These sections have been represented with rectangular boxes in Fig 3. Inputs were fed to the encoder portion of our autoencoder architecture. In the case of 128×128 pixels patches, we used rescaled input while training to keep the network depth the same and ensure faster training. We did not do this for 64×64 pixels patches. After the input was processed through the encoder, we obtained upper latent space, which portrays a compressed representation of the input samples. The features in the upper latent space were then passed through the classifier part of the network and reduced further to attain the lower latent space. The output of the classifier, i.e., the feature dimension of lower latent space varies according to the number of clusters of our choosing. Information maximization-based clustering was performed utilizing the lower dimensional latent features. Finally, the features in both the upper latent space and lower latent space were concatenated before being fed to the decoder. The decoder was then tasked with reconstructing the input from it.
We leveraged a convolutional autoencoder-based deep learning method for our research. Fig 3 shows a simplified representation of our convolutional autoencoder-based deep learning clustering model.
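As a rough illustration of the encoder–classifier–decoder layout described above, the following PyTorch-style sketch shows a much smaller stand-in for our 35-layer network; the layer counts, channel widths, and latent dimensions are placeholders, not the values listed in S1 Table.

```python
import torch
import torch.nn as nn

class IMClusterAE(nn.Module):
    """Condensed sketch: encoder -> upper latent space, classifier -> lower latent
    space (cluster probabilities), decoder reconstructs from both latents combined."""

    def __init__(self, n_clusters=14, upper_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(                       # 3x64x64 input assumed
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # -> 16x16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # -> 8x8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, upper_dim),               # upper latent space
        )
        self.classifier = nn.Linear(upper_dim, n_clusters)  # lower latent space
        self.decoder = nn.Sequential(
            nn.Linear(upper_dim + n_clusters, 128 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        upper = self.encoder(x)
        probs = torch.softmax(self.classifier(upper), dim=1)   # soft cluster assignment
        recon = self.decoder(torch.cat([upper, probs], dim=1)) # concatenated latents
        return recon, upper, probs
```

The lower latent dimension equals the chosen number of clusters, so the softmax over it directly gives the soft cluster assignment used by information maximization.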
Information maximization
Unsupervised learning is a quite challenging task since it is very much subjective. Finding meaningful patterns from large datasets without annotation is extremely helpful for many applications. Performing unsupervised clustering is equivalent to building a classifier without using annotated samples. In recent times, some researchers have shown improved unsupervised clustering performance by leveraging deep learning. The goal of most of these techniques is to cluster the data points in a way that data with similar characteristics are assigned in the same cluster. Deep learning-based clustering techniques are different from traditional clustering techniques as they cluster the data points by finding complex patterns rather than using simple pre-defined metrics like Euclidean distance or Manhattan distance.
From Fig 4, we can see that there are 12 layers in the encoder, 7 layers in the classifier, and 16 layers in the decoder section. So, our convolutional autoencoder framework is a 35-layer network. The notation ($-#) used in this figure denotes block ($) and layer (#) numbers, respectively. In S1 Table of the “Supporting information”, we have shown how the tensor shape is changed between different sections of our model.
Fig 4 presents the encoder, classifier, and decoder sections of our convolutional autoencoder architecture with detailed layer information.
Regularized information maximization is an information-theoretic approach to clustering that also accounts for classifier complexity. This method uses a differentiable loss function whose training objective is to maximize the mutual information between the model input and the model output while imposing a regularization penalty on the model parameters. Mutual information can be represented as the difference between marginal entropy and conditional entropy [35]. Hence, the training objective to minimize is
\( \mathcal{R}(\theta) - \lambda \, I(X; Y) \)  (1)
Here, \( \mathcal{R}(\theta) \) denotes the regularization penalty, I(X; Y) is the mutual information between X and Y, and λ works as a trade-off parameter. I(X; Y) can be further expressed as
\( I(X; Y) = H(Y) - H(Y|X) \)  (2)
where H(Y) = marginal entropy; H(Y|X) = conditional entropy. So, the purpose of this technique is to maximize the difference between H(Y) and H(Y|X), i.e., increase the mutual information.
By maximizing H(Y), diverse cluster assignments are ensured; hence, the model cannot degenerate by assigning all the data points to a single cluster [36]. By minimizing H(Y|X), the cluster assignment of a data point is accomplished with high confidence [36]. Conditional entropy H(Y|X) can be determined by the following equation
\( H(Y|X) = \frac{1}{N} \sum_{i=1}^{N} \mathcal{H}\big( p(y \mid x_i) \big) \)  (3)
and marginal entropy H(Y) can be calculated using this equation
\( H(Y) = \mathcal{H}\!\left( \frac{1}{N} \sum_{i=1}^{N} p(y \mid x_i) \right) \)  (4)
where N denotes the number of samples.
The entropy function for measuring the entropy of each probability distribution can be calculated by using the succeeding equation
\( \mathcal{H}(p) = -\sum_{c} p_c \log p_c \)  (5)
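A minimal sketch of how these entropy terms can be computed from the softmax output of the classifier (the lower latent space) is shown below; the variable names are illustrative.

```python
import torch

def entropy(p, eps=1e-8):
    """Eq (5): entropy of probability vectors along the last dimension."""
    return -(p * torch.log(p + eps)).sum(dim=-1)

def mutual_information(probs):
    """Eqs (2)-(4): I(X;Y) = H(Y) - H(Y|X) for a batch of soft cluster assignments.

    `probs` has shape (N, n_clusters) and each row sums to 1 (e.g. softmax output).
    """
    h_cond = entropy(probs).mean()           # Eq (3): conditional entropy H(Y|X)
    h_marg = entropy(probs.mean(dim=0))      # Eq (4): marginal entropy H(Y)
    return h_marg - h_cond, h_marg, h_cond

# Usage sketch: maximizing (h_marg - h_cond) encourages confident assignments
# spread over all clusters; in practice one minimizes its negative.
# probs = torch.softmax(classifier_logits, dim=1)
# mi, h_marg, h_cond = mutual_information(probs)
```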
The underlying mechanism of the information maximization technique in our work has been described with an illustration in S2 Fig of the “Supporting information”. In [37], the DeepInfoMax method was proposed; in that work, Hjelm et al. maximized the mutual information between the input and output of a deep neural network encoder to learn representations in an unsupervised manner. They did not perform any clustering but focused on training a GAN-like discriminator to differentiate between real and fake samples.
Loss function
The loss function, which is often regarded as the objective function, determines how well a deep learning model performs on the training data. In this research, we defined the loss function \( L_F \) as follows
\( L_F = R_L - \lambda_{ME} H(Y) + \lambda_{CE} H(Y|X) + \lambda_{AF} L_{AF} \)  (6)
Here, \( \lambda_{ME} \), \( \lambda_{CE} \), and \( \lambda_{AF} \) work as trade-offs for getting desirable outcomes for the marginal entropy \( H(Y) \), the conditional entropy \( H(Y|X) \), and the affine loss \( L_{AF} \), respectively; \( R_L \) is the reconstruction loss.
Reconstruction loss.
Autoencoders are unsupervised deep learning models in which we leverage neural networks for the task of representation learning. Specifically, this type of network architecture compresses the knowledge representation of the original input into a bottleneck utilizing a multi-layer encoder. This bottleneck is expected to contain meaningful representations of the given data point. When this meaningful low-dimensional representation is fed to a decoder, it produces an output of the same size as the input. Hence, this type of network can be trained by minimizing the reconstruction error \( \mathcal{L}(x, x') \), where x and x′ are the original input and the consequent reconstruction, respectively.
For our work, we used mean squared error (MSE) as the reconstruction loss, which denotes the average of the square of errors [38]. It is computed as the mean of squared differences between the original input and the reconstructed input as shown below
\( \mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} \left( x_{\mathrm{org},i} - x_{\mathrm{recon},i} \right)^2 \)  (7)
where \( x_{\mathrm{org},i} \) is the original input, \( x_{\mathrm{recon},i} \) is the reconstructed input, and \( n \) is the number of samples.
Affine loss.
An affine transformation is a geometric transformation that modifies an image while preserving lines and parallelism (though not necessarily lengths and angles). In this work, we applied rotation, translation, and scaling as affine transformations. The details regarding the affine transformation can be found in S3 Fig of the “Supporting information” with a demonstration.
Both the original and the transformed images are provided as input to the encoder section of our convolutional autoencoder architecture. KL (Kullback-Leibler) divergence is an elementary measure that quantifies the closeness of two probability distributions [39]. In our research, the KL divergence was measured between the lower-dimensional latent distributions of the original images and the transformed images using the following equation
\( D_{KL}(P \,\|\, Q) = \sum_{i} P(i) \log \frac{P(i)}{Q(i)} \)  (8)
The deviation between these two distributions is calculated as the affine loss and added as a penalty term to the loss function.
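As a minimal sketch (assuming the model interface from the architecture sketch above and illustrative transformation parameters), the affine consistency penalty could be computed as follows:

```python
import torch
import torchvision.transforms.functional as TF

def affine_kl_loss(model, x, angle=15.0, translate=(4, 4), scale=0.9, eps=1e-8):
    """Sketch of the affine penalty: KL divergence (Eq 8) between the cluster
    distributions of the original and the affine-transformed patches.
    The rotation/translation/scaling values here are illustrative."""
    x_aff = TF.affine(x, angle=angle, translate=list(translate),
                      scale=scale, shear=[0.0])
    _, _, p_orig = model(x)        # lower latent space of the original patches
    _, _, p_aff = model(x_aff)     # lower latent space of the transformed patches
    # D_KL(P || Q) = sum_i P(i) log(P(i) / Q(i)), averaged over the batch
    kl = (p_orig * (torch.log(p_orig + eps) - torch.log(p_aff + eps))).sum(dim=1)
    return kl.mean()
```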
Hyperparameters
Hyperparameters are the variables that determine the network structure of a deep learning model (e.g., number of hidden units) and also the variables that determine how the network is trained (e.g., learning rate) [40].
We listed the hyperparameters used for training the model in Table 1. These hyperparameters include the size of kernels, the lengths of strides, and so on. We trained our model for 4000 epochs with 128×128 pixels patches and 3000 epochs with 64×64 pixels patches. Beyond these epochs, the increase in mutual information (the difference between the marginal entropy and the conditional entropy) was insignificant. We chose ‘Adadelta’ as the optimizer due to its robustness against noisy gradient information, applicability to different model architectures, effectiveness across various data modalities, and efficacy with respect to the choice of hyperparameters [41].
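Putting the pieces together, a sketch of the training loop with the Adadelta optimizer might look as follows; it builds on the sketches above, the data loader is assumed to yield batches of patch tensors scaled to [0, 1], and the trade-off weights are placeholders rather than the settings in Table 1.

```python
import torch
import torch.nn.functional as F

model = IMClusterAE(n_clusters=14)                          # sketched earlier
optimizer = torch.optim.Adadelta(model.parameters())        # Adadelta, as in Table 1
lambda_me, lambda_ce, lambda_af = 1.0, 1.0, 1.0              # placeholder trade-offs

for epoch in range(4000):                                    # 3000 epochs for 64x64 patches
    for x in dataloader:                                     # batches of patch tensors
        optimizer.zero_grad()
        recon, upper, probs = model(x)
        mi, h_marg, h_cond = mutual_information(probs)       # Eqs (2)-(4)
        loss = (F.mse_loss(recon, x)                         # reconstruction loss, Eq (7)
                - lambda_me * h_marg                         # maximize marginal entropy
                + lambda_ce * h_cond                         # minimize conditional entropy
                + lambda_af * affine_kl_loss(model, x))      # affine penalty, Eq (8)
        loss.backward()
        optimizer.step()
```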
Results and discussion
Cluster validation
Cluster validation is used for evaluating the goodness of clustering algorithms. It can be categorized into three classes: internal cluster validation, external cluster validation, and relative cluster validation. In internal validation, only the clustered data are used to evaluate the goodness of clustering without reference to external information. It measures how closely related the objects in a cluster are [42]. In external validation, the result of a cluster analysis is compared to a pre-specified result [43]. Relative validation evaluates the clustering structure by varying parameter values for the same algorithm (e.g., varying the number of clusters in K-Means clustering). Relative criteria can compare two clustering structures and point out the better one in relative terms [44].
In this research, we tried to validate our information maximization-based clustering results using 6 cluster validation methods: Xie-Beni index, Calinski-Harabasz index, C index, Hartigan index, Dunn index and Mclain-Rao index. All of these are internal cluster validation techniques and can be applied in our case since our dataset does not have any annotation.
The Xie-Beni index focuses on compactness and separation [45]. A smaller Xie-Beni index indicates a partition in which all the clusters are more compact and better separated from each other [46]. The Calinski-Harabasz index is the ratio of the between-cluster scatter to the within-cluster scatter [47]. A higher value of the Calinski-Harabasz index indicates better clustering. The C index is another internal validation index. It is computed from the sum S of all distances between pairs of observations in the same cluster (over all clusters): S_min, the sum of the smallest pairwise distances in the entire dataset, is subtracted from S, and the result is divided by the difference between S_max, the sum of the largest pairwise distances in the entire dataset, and S_min [48]. A lower value of the C index denotes better clustering. The Hartigan index is based on the logarithmic relationship between the within-cluster sum of squares and the between-cluster sum of squares [49]. The higher the Hartigan index, the higher the clustering quality. The Dunn index is the ratio of the smallest distance between data points from different clusters to the largest distance between data points in the same cluster [50]. For a given assignment of clusters, a higher Dunn index indicates better clustering (its value ranges from zero to infinity). The Mclain-Rao index is defined as the quotient of the mean within-cluster and between-cluster distances. A lower value of the Mclain-Rao index is desirable.
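As an illustration, two of these indices can be computed from the embedded features and the assigned cluster IDs as sketched below (the Calinski-Harabasz index via scikit-learn and the Dunn index with a direct implementation); the variable names are illustrative.

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.metrics import calinski_harabasz_score

def dunn_index(features, labels):
    """Smallest between-cluster distance divided by largest within-cluster distance."""
    ids = np.unique(labels)
    min_between, max_within = np.inf, 0.0
    for i, a in enumerate(ids):
        xa = features[labels == a]
        max_within = max(max_within, cdist(xa, xa).max())       # cluster diameter
        for b in ids[i + 1:]:
            xb = features[labels == b]
            min_between = min(min_between, cdist(xa, xb).min())  # cluster separation
    return min_between / max_within

# `features`: upper latent vectors, `labels`: argmax cluster IDs per patch
# ch = calinski_harabasz_score(features, labels)   # higher is better
# di = dunn_index(features, labels)                # higher is better
```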
From Table 2, we can see that among these 6 metrics, three (the Xie-Beni index, the Calinski-Harabasz index, and the C index) indicated 14 as the optimal cluster set, while the Dunn index selected 13 and two metrics (the Hartigan index and the Mclain-Rao index) chose 15 as optimal for the 128×128 case. On the other hand, three metrics (the Hartigan index, the Dunn index, and the Mclain-Rao index) chose 14 as the best cluster set, while the Xie-Beni index, the Calinski-Harabasz index, and the C index selected 13, 11, and 17 as optimal, respectively, for the 64×64 case. Therefore, out of the 12 instances given by the 6 metrics for the two patch sizes, 6 instances indicated 14 as the optimal number of clusters; 13 and 15 were each indicated by 2 instances; and 11 and 17 were each indicated by 1 instance. For this reason, we chose 14 as the optimal number of clusters in both cases.
Fig 5 displays the plotting for different internal cluster validation indices for the selected number of clusters using both 128×128 and 64×64 pixels patches. The optimal number of clusters for each validation index can be easily detected from Fig 5.
Fig 5 shows the values of various internal validation indices for different clusters using both 128×128 and 64×64 pixels patches.
Clustering results
Cluster analysis or clustering is the task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some sense) to each other than those in other groups (clusters).
A pathology specialist determined the names of the clusters obtained using 128×128 pixels patches. These names are provided below.
- Cluster 1: Cancer cells
- Cluster 2: Liver, Kidney
- Cluster 3: Blank
- Cluster 4: Intestinal mucosa, Spleen
- Cluster 5: Vascular endothelium
- Cluster 6: Tissue gap
- Cluster 7: Cancer cells with rich stroma
- Cluster 8: Liver peripheral
- Cluster 9: Liver center
- Cluster 10: Tissue stump
- Cluster 11: Cancer cells
- Cluster 12: Liver
- Cluster 13: Dense small blood vessels
- Cluster 14: Intestinal mucosa
We would like to mention that the pathologist determined the names of the clusters shown in Fig 6 by examining the colormap (which shows different regions with different colors) of a randomly selected WSI out of the 191 that we had available. We explain this in more detail later. We can notice some types of cancer cells in three clusters (1, 7, and 11). Pancreatic cancer is well known to be heterogeneous in its molecular and morphological phenotype [51].
Fig 6 displays the clustering results of the 14-cluster set obtained using the 15000 samples of size 128×128 pixels.
Now, we will present the clustering results of the 14-cluster set attained with the 64×64 pixels patches. The number of samples is 15000 in this case as well which were created from 191 WSIs. The pathologist did not provide annotations for the clusters displayed in Fig 7, but we can notice a lot of similarities with 128×128 patches.
Fig 7 displays the clustering results of the 14-cluster set obtained using the 15000 samples of size 64×64 pixels.
Embedded features visualization
We carried out this unsupervised learning task for cluster sets of 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, and 18 using data with the two different patch sizes. These are the values of n_c, the number of clusters. After the completion of training, we obtained embedded features as encoded variables from the upper latent space and a probability distribution for each patch from the lower latent space. We leveraged UMAP to visualize the embedded features in a 2-dimensional space.
From Fig 8, we can notice that there is no clear boundary between the clusters. Nevertheless, the data points at distant places can be assigned to different clusters. UMAP shows that the clusters are not widely separated (they are adjacent to each other). The encoder mapped the patches in this dataset within a very small region, and for this reason, when we visualized them in 2-dimensional space, they appeared the way they are seen in the UMAP plotting. UMAP is a novel manifold learning technique for dimension reduction. It has no computational restrictions on the embedding dimension, thus making it feasible as a general-purpose dimension reduction technique [52]. We reduced the dimension of the embedded features down to 2 for both datasets.
Fig 8 exhibits the visualization of the embedded features of the 14-cluster set corresponding to 128×128 (part a) and 64×64 (part b) pixels patches.
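A minimal sketch of this visualization step, assuming the upper latent features and cluster IDs have already been extracted, is shown below; the UMAP settings and variable names are illustrative.

```python
import umap
import matplotlib.pyplot as plt

# Reduce the upper latent features to 2 dimensions with UMAP.
reducer = umap.UMAP(n_components=2, random_state=42)
embedding_2d = reducer.fit_transform(upper_features)        # shape: (15000, 2)

# Scatter plot colored by the assigned cluster ID of each patch.
plt.scatter(embedding_2d[:, 0], embedding_2d[:, 1],
            c=cluster_ids, cmap="tab20", s=2)
plt.title("UMAP of embedded patches (14-cluster set)")
plt.show()
```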
Patch-based anomaly detection in WSI using the trained model
We carried out patch-based anomaly detection by choosing one WSI randomly out of the 191. Fig 9 shows this WSI. We already mentioned that the size of each WSI is 15000×20000 (Height×Width) pixels. We created 128×128 pixels patches sequentially (without overlap) from this WSI, which provided us with 18252 patches from one WSI. After that, these patches were fed to the 14-cluster model for 128×128 pixels patches. Subsequently, we obtained a cluster ID for each of those patches and used the IDs to create a colormap of this particular WSI.
Fig 9 shows our randomly selected WSI with 5 staining techniques (HE, MT, CD31, CK19 and Ki67). We used it to create the 18252 patches without overlap of size 128×128.
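A sketch of this colormap construction is given below; `predict_cluster` is a hypothetical helper standing in for the trained 14-cluster model, while the tiling itself reproduces the 117×156 = 18252 non-overlapping patch grid of a 15000×20000 slide.

```python
import numpy as np
import matplotlib.pyplot as plt

def wsi_colormap(wsi, predict_cluster, patch=128):
    """Tile one WSI into non-overlapping patches and paint each patch's cluster ID."""
    rows, cols = wsi.shape[0] // patch, wsi.shape[1] // patch   # 117, 156 for 15000 x 20000
    grid = np.zeros((rows, cols), dtype=int)
    for r in range(rows):
        for c in range(cols):
            tile = wsi[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch]
            grid[r, c] = predict_cluster(tile)   # cluster ID from the trained model
    return grid

# plt.imshow(wsi_colormap(wsi, predict_cluster), cmap="tab20")
# plt.colorbar(label="cluster ID"); plt.show()
```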
From the histogram (part a) of Fig 10, we observe that 309 patches have been categorized as cancer cells (cluster 1); 140 are termed cancer cells with rich stroma (cluster 7). 154 samples (cancer cells) have gone to cluster 11. Clusters 2, 8, 9 and 12 have 857, 199, 1359 and 935 patches, respectively. Cluster 2 contains the liver and kidney; cluster 8 has the liver’s peripheral tissue; cluster 9 includes the liver center and cluster 12 holds liver tissue. Cluster 3 (blank) accumulated 12127 patches; cluster 6 (tissue gap) assembled 437 patches; cluster 10 (tissue stump) gathered 404 patches. 379 samples in cluster 4 denote intestinal mucosa and spleen. 462 samples in cluster 5 indicate vascular endothelium. 259 patches in cluster 13 portray dense small blood vessels. 231 specimens depict intestinal mucosa in cluster 14. We can see the positioning of these different patches in the colormap (part b) of our arbitrarily chosen WSI. In this way, we can understand a WSI’s overall condition from our experiment. The pathologist also confirmed that the pancreas region of this particular WSI has been heavily replaced by pancreatic cancer and there is no normal cell available in the pancreas of this WSI. In other terms, cancer cells in this WSI originated from the pancreas.
Fig 10 shows a colormap (part b) where different cluster IDs of 18252 patches have been indicated with designated colors. A histogram (part a) has also been shown which represents the frequency of the samples (among 18252 patches) in each cluster. The same procedure can be done using 64×64 pixels patches as well but here, we have only done it using 128×128 patches. We are repeating our earlier statement that the pathologist used the multi-stained representation of this particular WSI and its colormap to name various tissue features of the 14-cluster set using patches of 128×128 pixels.
Conclusion
In this study, we mainly performed pathological image analysis using whole-slide histopathology images of KPC mice. We leveraged deep learning methodology based on convolutional autoencoder architecture to categorize different tissue features of the pancreas in distinct clusters. At first, we randomly created small patches from whole-slide images (WSIs) which were prepared with 5 different staining techniques. We embedded these patches into an integrated latent space using our deep learning model. We utilized information maximization, an unsupervised clustering technique that accomplishes the task of separating different histological features into distinguishable clusters. We also visualized the clustering structure of the embedded features by employing UMAP in a 2-dimensional space. Additionally, we confirmed the optimal number of clusters using several internal cluster validation metrics. We also carried out patch-based anomaly detection in a WSI and represented different patches with specific colors in a colormap.
We used two different patch sizes for conducting this experiment: 128×128 and 64×64 pixels. We obtained the 14-cluster set as the optimal number of clusters for both patch sizes. We obtained the names of the different types of tissue features for the 128×128 pixels patches after consulting with a pathologist; these features have recognizable properties of their own. The optimal number of clusters obtained by the internal validation indices indicates that the data within the clusters of these particular cluster sets (14 in both cases) are more similar to each other than in other cluster sets.
Deep learning algorithms are often referred to as “black boxes”, and it is often unclear why a particular algorithm works. Hence, a way is required for the AI to present to experts the information on which it bases its predictions, and for expert personnel to verify the outputs of the AI [53]. Using such human-AI interfaces, medical practitioners will be able to make more credible decisions with a high level of understanding.
The significance of this research is that it can carry out patch-based anomaly detection in whole-slide images. It can assist professionals by detecting cancer in the WSIs. Besides, computer-aided diagnosis (CAD) plays an important role in helping medical practitioners make diagnostic decisions.
- The code for this work can be found at this address: https://github.com/randomaccess2023/KPC_IM_128_64.
- The datasets (128×128 and 64×64 pixels) can be located here: https://doi.org/10.6084/m9.figshare.24129360.
- The dataset obtained from sequential patch creation can be accessed from this link: https://doi.org/10.6084/m9.figshare.24137553.
Limitations
We already mentioned that we randomly created 15000 samples of two different patch sizes (128×128 and 64×64 pixels) from the 191 WSIs. After analyzing the clustering results, we found the 14-cluster set to be optimal (using internal validation indices) for both the 128×128 and 64×64 pixels patches. However, since we created these patches randomly, we cannot guarantee that our dataset contains all the distinctive tissue features that can be extracted from the WSIs. Furthermore, the patches used in this experiment do not have annotations, and therefore external cluster validation is not possible in this research.
Supporting information
S1 Table. Summary of encoder, classifier and decoder section.
https://doi.org/10.1371/journal.pdig.0000391.s001
(PDF)
S2 Fig. Original vs. transformed image (128×128).
https://doi.org/10.1371/journal.pdig.0000391.s003
(PDF)
S4 Fig. Information maximization output for the 14-cluster set (64×64).
https://doi.org/10.1371/journal.pdig.0000391.s005
(PDF)
S5 Fig. Information maximization output of unused samples (128×128) during training using the 14-cluster set model.
https://doi.org/10.1371/journal.pdig.0000391.s006
(PDF)
References
- 1. Litjens G, Sánchez CI, Timofeeva N, Hermsen M, Nagtegaal I, Kovacs I, et al. Deep learning as a tool for increased accuracy and efficiency of histopathological diagnosis. Scientific reports. 2016 May 23;6(1):26286. pmid:27212078
- 2. Luo W, Tao J, Zheng L, Zhang T. Current epidemiology of pancreatic cancer: Challenges and opportunities. Chinese Journal of Cancer Research. 2020 Dec 12;32(6):705. pmid:33446994
- 3. Bray F, Ferlay J, Soerjomataram I, Siegel RL, Torre LA, Jemal A. Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA: a cancer journal for clinicians. 2018 Nov;68(6):394–424. pmid:30207593
- 4. Jagadeesan B, Haran PH, Praveen D, Chowdary PR, Aanandhi MV. A comprehensive review on pancreatic cancer. Research Journal of Pharmacy and Technology. 2021;14(1):552–4.
- 5. Menini S, Iacobini C, Vitale M, Pesce C, Pugliese G. Diabetes and pancreatic cancer–A dangerous liaison relying on carbonyl stress. Cancers. 2021 Jan 16;13(2):313. pmid:33467038
- 6. Stewart BW, Kleihues P, editors. World cancer report. Lyon: IARC press; 2003 Mar.
- 7. Rawla P, Sunkara T, Gaduputi V. Epidemiology of pancreatic cancer: global trends, etiology and risk factors. World journal of oncology. 2019 Feb;10(1):10. pmid:30834048
- 8. Hu JX, Zhao CF, Chen WB, Liu QC, Li QW, Lin YY, et al. Pancreatic cancer: A review of epidemiology, trend, and risk factors. World journal of gastroenterology. 2021 Jul 7; 27(27):4298. pmid:34366606
- 9. Kriegsmann M, Kriegsmann K, Steinbuss G, Zgorzelski C, Kraft A, Gaida MM. Deep learning in pancreatic tissue: Identification of anatomical structures, pancreatic intraepithelial neoplasia, and ductal adenocarcinoma. International Journal of Molecular Sciences. 2021 May 20;22(10):5385. pmid:34065423
- 10. Dromain C, Boyer B, Ferre R, Canale S, Delaloge S, Balleyguier C. Computed-aided diagnosis (CAD) in the detection of breast cancer. European journal of radiology. 2013 Mar 1;82(3):417–23. pmid:22939365
- 11. Ozkan M, Cakiroglu M, Kocaman O, Kurt M, Yilmaz B, Can G, et al. Age-based computer-aided diagnosis approach for pancreatic cancer on endoscopic ultrasound images. Endoscopic Ultrasound. 2016 Mar 5;5(2):101. pmid:27080608
- 12. Yanase J, Triantaphyllou E. A systematic survey of computer-aided diagnosis in medicine: Past and present developments. Expert Systems with Applications. 2019 Dec 30;138:112821.
- 13. Huang B, Huang H, Zhang S, Zhang D, Shi Q, Liu J, et al. Artificial intelligence in pancreatic cancer. Theranostics. 2022;12(16):6931. pmid:36276650
- 14. Halalli B, Makandar A. Computer aided diagnosis-medical image analysis techniques. Breast imaging. 2018 Jan 17;85:85–109.
- 15. Fu H, Mi W, Pan B, Guo Y, Li J, Xu R, et al. Automatic pancreatic ductal adenocarcinoma detection in whole slide images using deep convolutional neural networks. Frontiers in oncology. 2021 Jun 25;11:665929. pmid:34249702
- 16. Giger ML, Suzuki K. Computer-aided diagnosis. In: Biomedical information technology. Academic Press; 2008 Jan 1. pp. 359–XXII.
- 17. Doi K. Computer-aided diagnosis in medical imaging: historical review, current status and future potential. Computerized medical imaging and graphics. 2007 Jun 1;31(4–5):198–211. pmid:17349778
- 18. Kim M, Yun J, Cho Y, Shin K, Jang R, Bae HJ, et al. Deep learning in medical imaging. Neurospine. 2019 Dec;16(4):657. pmid:31905454
- 19. Yadav SS, Jadhav SM. Deep convolutional neural network based medical image classification for disease diagnosis. Journal of Big data. 2019 Dec;6(1):1–8.
- 20. Ronneberger O, Fischer P, Brox T. U-net: Convolutional networks for biomedical image segmentation. In: Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5–9, 2015, Proceedings, Part III. Springer International Publishing; 2015. pp. 234–241.
- 21. He K, Gkioxari G, Dollár P, Girshick R. Mask R-CNN. In: Proceedings of the IEEE International Conference on Computer Vision; 2017. pp. 2961–2969.
- 22. Isola P, Zhu JY, Zhou T, Efros AA. Image-to-image translation with conditional adversarial networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2017. pp. 1125–1134.
- 23. Yang Q, Li N, Zhao Z, Fan X, Chang EI, Xu Y. MRI cross-modality image-to-image translation. Scientific reports. 2020 Feb 28;10(1):3753. pmid:32111966
- 24. Obermeyer Z, Emanuel EJ. Predicting the future–big data, machine learning, and clinical medicine. The New England journal of medicine. 2016 Sep 9;375(13):1216. pmid:27682033
- 25. Verbeke C. Morphological heterogeneity in ductal adenocarcinoma of the pancreas–Does it matter?. Pancreatology. 2016 May 1;16(3):295–301. pmid:26924665
- 26. Hameed Z, Garcia-Zapirain B, Aguirre JJ, Isaza-Ruget MA. Multiclass classification of breast cancer histopathology images using multilevel features of deep convolutional neural network. Scientific Reports. 2022 Sep 16;12(1):15600. pmid:36114214
- 27. Srinidhi CL, Ciga O, Martel AL. Deep neural network models for computational histopathology: A survey. Medical Image Analysis. 2021 Jan 1;67:101813. pmid:33049577
- 28. Hu B, Tang Y, Eric I, Chang C, Fan Y, Lai M, et al. Unsupervised learning for cell-level visual representation in histopathology images with generative adversarial networks. IEEE journal of biomedical and health informatics. 2018 Jul 3;23(3):1316–28. pmid:29994411
- 29. Lee J, Warner E, Shaikhouni S, Bitzer M, Kretzler M, Gipson D, et al. Unsupervised machine learning for identifying important visual features through bag-of-words using histopathology data from chronic kidney disease. Scientific Reports. 2022 Mar 22;12(1):4832. pmid:35318420
- 30. Yan J, Chen H, Li X, Yao J. Deep contrastive learning based tissue clustering for annotation-free histopathology image analysis. Computerized Medical Imaging and Graphics. 2022 Apr 1;97:102053. pmid:35306442
- 31. Vohra R, Park J, Wang YN, Gravelle K, Whang S, Hwang JH, et al. Evaluation of pancreatic tumor development in KPC mice using multi-parametric MRI. Cancer Imaging. 2018 Dec;18(1):1–1. pmid:30409175
- 32. Pintelas E, Livieris IE, Pintelas PE. A convolutional autoencoder topology for classification in high-dimensional noisy image datasets. Sensors. 2021 Nov 20;21(22):7731. pmid:34833805
- 33. Zhang Y, de Haan K, Rivenson Y, Li J, Delis A, Ozcan A. Digital synthesis of histological stains using micro-structured and multiplexed virtual staining of label-free tissue. Light: Science & Applications. 2020 May 6;9(1):78. pmid:32411363
- 34. Alturkistani HA, Tashkandi FM, Mohammedsaleh ZM. Histological stains: a literature review and case study. Global journal of health science. 2016 Mar;8(3):72.
- 35. Hu W, Miyato T, Tokui S, Matsumoto E, Sugiyama M. Learning discrete representations via information maximizing self-augmented training. In: International Conference on Machine Learning; 2017 Jul 17. pp. 1558–1567. PMLR.
- 36. Asano K, Ono N, Iwamoto C, Ohuchida K, Shindo K, Kanaya S. Feature extraction and cluster analysis of pancreatic pathological image based on unsupervised convolutional neural network. In: 2018 International Conference on Bioinformatics and Biomedicine (BIBM); 2018 Dec 3. pp. 2738–2740. IEEE.
- 37. Hjelm RD, Fedorov A, Lavoie-Marchildon S, Grewal K, Bachman P, Trischler A, et al. Learning deep representations by mutual information estimation and maximization. arXiv:1808.06670 [Preprint]. 2018 [cited 2018 Aug 20].
- 38. Varshini AP, Kumari KA, Janani D, Soundariya S. Comparative analysis of machine learning and deep learning algorithms for software effort estimation. In: Journal of Physics: Conference Series 2021 Feb 1 (Vol. 1767, No. 1, p. 012019). IOP Publishing.
- 39. Shlens J. Notes on Kullback-Leibler divergence and likelihood. arXiv:1404.2000 [Preprint]. 2014 [cited 2014 Apr 8].
- 40. Young SR, Rose DC, Karnowski TP, Lim SH, Patton RM. Optimizing deep learning hyper-parameters through an evolutionary algorithm. In: Proceedings of the workshop on machine learning in high-performance computing environments 2015 Nov 15 (pp. 1–5).
- 41. Zeiler MD. Adadelta: an adaptive learning rate method. arXiv:1212.5701 [Preprint]. 2012 [cited 2012 Dec 22].
- 42. Liu Y, Li Z, Xiong H, Gao X, Wu J. Understanding of internal clustering validation measures. In: 2010 IEEE International Conference on Data Mining; 2010 Dec 13. pp. 911–916. IEEE.
- 43. Rendón E, Abundez I, Arizmendi A, Quiroz EM. Internal versus external cluster validation indexes. International Journal of computers and communications. 2011 Mar;5(1):27–34.
- 44. Moulavi D, Jaskowiak PA, Campello RJ, Zimek A, Sander J. Density-based clustering validation. In: Proceedings of the 2014 SIAM International Conference on Data Mining; 2014 Apr 18. pp. 839–847. Society for Industrial and Applied Mathematics.
- 45. Lathief MF, Soesanti I, Permanasari AE. Combination of Fuzzy C-Means, Xie-Beni Index, and Backpropagation Neural Network for Better Forecasting Result. Science and Technology Publications: Setubal, Portugal. 2020:72–7.
- 46. Xie XL, Beni G. A validity measure for fuzzy clustering. IEEE Transactions on Pattern Analysis & Machine Intelligence. 1991 Aug 1;13(08):841–7.
- 47. Saitta S, Raphael B, Smith IF. A comprehensive validity index for clustering. Intelligent Data Analysis. 2008 Jan 1;12(6):529–48.
- 48. Ansari Z, Azeem M, Ahmed W, Babu AV. Quantitative evaluation of performance and validity indices for clustering the web navigational sessions. arXiv:1507.03340 [Preprint]. 2015 [cited 2015 Jul 13].
- 49. Hartigan JA. Clustering Algorithms (Wiley Series in Probability and Mathematical Statistics). 1st ed. John Wiley and Sons; 1975.
- 50. Palacio-Niño JO, Berzal F. Evaluation metrics for unsupervised learning algorithms. arXiv:1905.05667 [Preprint]. 2019 [cited 2019 May 14].
- 51. Szymoński K, Milian-Ciesielska K, Lipiec E, Adamek D. Current pathology model of pancreatic cancer. Cancers. 2022 May 7;14(9):2321. pmid:35565450
- 52. McInnes L, Healy J, Melville J. UMAP: uniform manifold approximation and projection for dimension reduction. arXiv:1802.03426 [Preprint]. 2018 [cited 2018 Feb 9].
- 53. Plass M, Kargl M, Kiehl TR, Regitnig P, Geißler C, Evans T, et al. Explainability and causability in digital pathology. The Journal of Pathology: Clinical Research. 2023 Apr 12;9(4):251–60. pmid:37045794