
Prof. Dr. Kazuhiko Hamamoto
Tokai University

Basic Info


Research Keywords & Expertise

Cognitive Science
Human Interface Design
Medical Image Processing
Virtual Reality Environment
Virtual Reality Applications


Short Biography

Kazuhiko Hamamoto is currently a professor at the Department of Information Media Technology, School of Information and Telecommunication Engineering, Tokai University, Japan, and has been Dean of the school since 2017. He received his B.Eng., M.Eng., and D.Eng. degrees from Tokyo University of Agriculture and Technology in 1989, 1991, and 1994, respectively. His research lies in the areas of medical information, human interface design, and virtual reality. He has published around 60 journal/transaction papers and more than 85 international conference papers. He is a member of the IEEE and of many national societies in Japan.


Feed

Journal article
Published: 10 March 2021 in Sensors

Automated segmentation methods are critical for early detection, prompt action, and immediate treatment to reduce the disability and death risks of brain infarction. This paper aims to develop a fully automated method to segment infarct lesions from T1-weighted brain scans. As a key novelty, the proposed method combines variational mode decomposition and deep-learning-based segmentation to take advantage of both methods and provide better results. There are three main technical contributions in this paper. First, variational mode decomposition is applied as a pre-processing step to discriminate the infarct lesions from unwanted non-infarct tissues. Second, an overlapped-patch strategy is proposed to reduce the workload of the deep-learning-based segmentation task. Finally, a three-dimensional U-Net model is developed to perform patch-wise segmentation of infarct lesions. A total of 239 brain scans from a public dataset are utilized to develop and evaluate the proposed method. Empirical results reveal that the proposed automated segmentation provides promising performance, with an average Dice similarity coefficient (DSC) of 0.6684, an intersection over union (IoU) of 0.5022, and an average symmetric surface distance (ASSD) of 0.3932.
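The overlapped-patch strategy can be sketched in a few lines: tile the volume with fixed-size patches at a chosen stride, adding a final patch flush with each edge so no voxels are missed. The patch size and stride below are illustrative assumptions, not the paper's actual settings.

```python
def patch_starts(length, patch, stride):
    """Start indices of overlapping patches that fully cover one axis."""
    if patch > length:
        raise ValueError("patch larger than axis length")
    starts = list(range(0, length - patch + 1, stride))
    if starts[-1] != length - patch:  # add a final patch flush with the edge
        starts.append(length - patch)
    return starts

def patch_grid(shape, patch, stride):
    """3D patch origins (z, y, x) covering a volume of the given shape."""
    zs, ys, xs = (patch_starts(s, patch, stride) for s in shape)
    return [(z, y, x) for z in zs for y in ys for x in xs]
```

For example, a 64×64×64 volume with 32-voxel patches and a 16-voxel stride yields 27 patch origins (3 per axis); the patch-wise U-Net predictions would then be stitched back together, averaging over the overlapped regions.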

ACS Style

May Paing; Supan Tungjitkusolmun; Toan Bui; Sarinporn Visitsattapongse; Chuchart Pintavirooj. Automated Segmentation of Infarct Lesions in T1-Weighted MRI Scans Using Variational Mode Decomposition and Deep Learning. Sensors 2021, 21, 1952.

AMA Style

May Paing, Supan Tungjitkusolmun, Toan Bui, Sarinporn Visitsattapongse, Chuchart Pintavirooj. Automated Segmentation of Infarct Lesions in T1-Weighted MRI Scans Using Variational Mode Decomposition and Deep Learning. Sensors. 2021;21(6):1952.

Chicago/Turabian Style

May Paing; Supan Tungjitkusolmun; Toan Bui; Sarinporn Visitsattapongse; Chuchart Pintavirooj. 2021. "Automated Segmentation of Infarct Lesions in T1-Weighted MRI Scans Using Variational Mode Decomposition and Deep Learning." Sensors 21, no. 6: 1952.

Journal article
Published: 24 February 2021 in Applied Sciences

Caries is one of the most widespread oral diseases, affecting the oral health of billions of people around the world. Despite the importance and necessity of a well-designed detection method, studies in caries detection are still limited and restricted in performance. In this paper, we propose a computer-aided diagnosis (CAD) method to detect caries in dental radiographs. The proposed method consists of two main processes: feature extraction and classification. In the feature extraction phase, deep activated features are extracted from the chosen 2D tooth image using a deep pre-trained model, and geometric features are computed using mathematical formulas. Both feature sets are then combined into a fusion feature set, so that each compensates for the other's deficiencies. The optimal fusion feature set is then fed into well-known classification models, namely support vector machine (SVM), k-nearest neighbor (KNN), decision tree (DT), Naïve Bayes (NB), and random forest (RF), to determine which classifier best fits the fusion features and yields the strongest result. The results show 91.70% accuracy, 90.43% sensitivity, and 92.67% specificity. The proposed method outperforms the previous state of the art, with none of the measured factors below 90%; it is therefore promising for dentists and suitable for wide-scale implementation of caries detection in hospitals.
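The fusion step, concatenating the deep-activated and geometric feature vectors into a single descriptor, might look like the sketch below; the per-column z-score normalization is an assumption added for illustration, not a detail given in the abstract.

```python
def zscore(column):
    """Standardize one feature column to zero mean and unit variance."""
    mean = sum(column) / len(column)
    sd = (sum((v - mean) ** 2 for v in column) / len(column)) ** 0.5
    return [(v - mean) / sd if sd else 0.0 for v in column]

def fuse(deep_feats, geo_feats):
    """Concatenate deep and geometric features row-wise, then z-score
    each column so neither feature family dominates by scale."""
    rows = [list(d) + list(g) for d, g in zip(deep_feats, geo_feats)]
    cols = [zscore(col) for col in zip(*rows)]
    return [list(r) for r in zip(*cols)]
```

The fused rows could then be passed to any of the compared classifiers (SVM, KNN, DT, NB, RF).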

ACS Style

Toan Bui; Kazuhiko Hamamoto; May Paing. Deep Fusion Feature Extraction for Caries Detection on Dental Panoramic Radiographs. Applied Sciences 2021, 11, 2005.

AMA Style

Toan Bui, Kazuhiko Hamamoto, May Paing. Deep Fusion Feature Extraction for Caries Detection on Dental Panoramic Radiographs. Applied Sciences. 2021;11(5):2005.

Chicago/Turabian Style

Toan Bui; Kazuhiko Hamamoto; May Paing. 2021. "Deep Fusion Feature Extraction for Caries Detection on Dental Panoramic Radiographs." Applied Sciences 11, no. 5: 2005.

Journal article
Published: 20 August 2020 in Applied Sciences

Tuberculosis (TB) is a leading infectious killer, especially for people with Human Immunodeficiency Virus (HIV) and Acquired Immunodeficiency Syndrome (AIDS). Early diagnosis of TB is crucial for disease treatment and control. Radiology is a fundamental diagnostic tool used to screen or triage TB. Automated chest X-ray analysis can facilitate and expedite TB screening with fast and accurate reports of radiological findings, and can rapidly screen large populations and alleviate the shortage of skilled experts in remote areas. We describe a hybrid feature-learning algorithm for automatic screening of TB in chest X-rays: it first segmented the lung regions using the DeepLabv3+ model. Then, six sets of hand-crafted features (statistical textures, local binary patterns, GIST, histogram of oriented gradients (HOG), pyramid histogram of oriented gradients, and bags of visual words (BoVW)) and nine sets of deep-activated features (from AlexNet, GoogLeNet, InceptionV3, XceptionNet, ResNet-50, SqueezeNet, ShuffleNet, MobileNet, and DenseNet) were extracted. The dominant features of each feature set were selected using particle swarm optimization and then separately input to an optimized support vector machine classifier to label 'normal' and 'TB' X-rays. GIST, HOG, and BoVW among the hand-crafted features, and MobileNet and DenseNet among the deep-activated features, performed better than the others. Finally, we combined these five best-performing feature sets to build a hybrid-learning algorithm. Using the Montgomery County (MC) and Shenzhen datasets, we found that the hybrid features of GIST, HOG, BoVW, MobileNet, and DenseNet performed best, achieving an accuracy of 92.5% on the MC dataset and 95.5% on the Shenzhen dataset.
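Particle swarm optimization over binary feature masks can be sketched as below; the sigmoid transfer that maps a particle's velocity to a bit-flip probability is one common binary-PSO variant, not necessarily the exact formulation used in the paper, and the toy fitness in the usage note is purely illustrative.

```python
import math, random

def binary_pso(n_feats, fitness, n_particles=8, iters=30, seed=0):
    """Simplified binary PSO: bit masks drift toward the personal and
    global best-scoring feature subsets found so far."""
    rng = random.Random(seed)
    pos = [[rng.randint(0, 1) for _ in range(n_feats)] for _ in range(n_particles)]
    vel = [[0.0] * n_feats for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = max(pos, key=fitness)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_feats):
                vel[i][d] += rng.random() * (pbest[i][d] - pos[i][d]) \
                           + rng.random() * (gbest[d] - pos[i][d])
                # sigmoid transfer turns velocity into a bit probability
                pos[i][d] = 1 if rng.random() < 1 / (1 + math.exp(-vel[i][d])) else 0
            if fitness(pos[i]) > fitness(pbest[i]):
                pbest[i] = pos[i][:]
            if fitness(pos[i]) > fitness(gbest):
                gbest = pos[i][:]
    return gbest
```

With a toy fitness that rewards two informative features and mildly penalizes mask size, the swarm converges toward a small informative subset, mirroring how the dominant features of each feature set were selected here.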

ACS Style

Khin Yadanar Win; Noppadol Maneerat; Kazuhiko Hamamoto; Syna Sreng. Hybrid Learning of Hand-Crafted and Deep-Activated Features Using Particle Swarm Optimization and Optimized Support Vector Machine for Tuberculosis Screening. Applied Sciences 2020, 10, 5749.

AMA Style

Khin Yadanar Win, Noppadol Maneerat, Kazuhiko Hamamoto, Syna Sreng. Hybrid Learning of Hand-Crafted and Deep-Activated Features Using Particle Swarm Optimization and Optimized Support Vector Machine for Tuberculosis Screening. Applied Sciences. 2020;10(17):5749.

Chicago/Turabian Style

Khin Yadanar Win; Noppadol Maneerat; Kazuhiko Hamamoto; Syna Sreng. 2020. "Hybrid Learning of Hand-Crafted and Deep-Activated Features Using Particle Swarm Optimization and Optimized Support Vector Machine for Tuberculosis Screening." Applied Sciences 10, no. 17: 5749.

Journal article
Published: 17 July 2020 in Applied Sciences

Glaucoma is a major global cause of blindness. Because its symptoms appear only when the disease reaches an advanced stage, proper screening for glaucoma in the early stages is challenging. Therefore, regular glaucoma screening is essential and recommended. However, eye screening is currently subjective, time-consuming, and labor-intensive, and there are insufficient eye specialists available. We present an automatic two-stage glaucoma screening system to reduce the workload of ophthalmologists. The system first segmented the optic disc region using a DeepLabv3+ architecture whose encoder module was substituted with multiple deep convolutional neural networks. For the classification stage, we used pretrained deep convolutional neural networks in three proposals: (1) transfer learning, (2) learning the feature descriptors using a support vector machine, and (3) building an ensemble of the methods in (1) and (2). We evaluated our methods on five available datasets containing 2787 retinal images and found that the best option for optic disc segmentation is a combination of DeepLabv3+ and MobileNet. For glaucoma classification, the ensemble of methods performed better than the conventional methods on the RIM-ONE, ORIGA, DRISHTI-GS1, and ACRIMA datasets, with accuracies of 97.37%, 90.00%, 86.84%, and 99.53% and Areas Under the Curve (AUC) of 100%, 92.06%, 91.67%, and 99.98%, respectively, and performed comparably with CUHKMED, the top team in the REFUGE challenge, on the REFUGE dataset, with an accuracy of 95.59% and an AUC of 95.10%.
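The ensemble in proposal (3) can be illustrated by soft voting: average the per-class probabilities produced by the individual methods and take the arg-max. This is a generic sketch of probability averaging, not necessarily the exact combination rule used in the paper.

```python
def soft_vote(prob_lists):
    """Average per-class probabilities from several models; the class
    with the highest mean probability wins."""
    n_models = len(prob_lists)
    avg = [sum(ps) / n_models for ps in zip(*prob_lists)]
    return avg.index(max(avg)), avg
```

For instance, if three models give a retinal image "glaucoma" probabilities of 0.9, 0.6, and 0.2, the ensemble averages them to about 0.57 and still flags the image, even though one model disagreed.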

ACS Style

Syna Sreng; Noppadol Maneerat; Kazuhiko Hamamoto; Khin Yadanar Win. Deep Learning for Optic Disc Segmentation and Glaucoma Diagnosis on Retinal Images. Applied Sciences 2020, 10, 4916.

AMA Style

Syna Sreng, Noppadol Maneerat, Kazuhiko Hamamoto, Khin Yadanar Win. Deep Learning for Optic Disc Segmentation and Glaucoma Diagnosis on Retinal Images. Applied Sciences. 2020;10(14):4916.

Chicago/Turabian Style

Syna Sreng; Noppadol Maneerat; Kazuhiko Hamamoto; Khin Yadanar Win. 2020. "Deep Learning for Optic Disc Segmentation and Glaucoma Diagnosis on Retinal Images." Applied Sciences 10, no. 14: 4916.

Journal article
Published: 29 March 2020 in Applied Sciences

The detection of pulmonary nodules on computed tomography scans provides a clue for the early diagnosis of lung cancer. Manual detection imposes a heavy radiological workload, as nodules must be identified slice by slice. This paper presents a fully automated nodule detection method with three significant contributions. First, an automated seeded region growing method is designed to segment the lung regions from the tomography scans. Second, a three-dimensional chain code algorithm is implemented to refine the border of the segmented lungs. Lastly, nodules inside the lungs are detected using an optimized random forest classifier. Experiments on the proposed detection are conducted using 888 scans from a public dataset, achieving a favorable result of 93.11% accuracy, 94.86% sensitivity, and 91.37% specificity, with only 0.0863 false positives per exam.
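Seeded region growing, used here to segment the lung fields, can be sketched in 2D: starting from a seed pixel, repeatedly absorb neighbours whose intensity stays within a tolerance of the seed value. The tolerance and 4-connectivity below are illustrative choices; the paper's method additionally automates the seed selection itself.

```python
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from `seed`, adding 4-neighbours whose intensity
    differs from the seed value by at most `tol` (breadth-first)."""
    h, w = len(image), len(image[0])
    base = image[seed[0]][seed[1]]
    region, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in region \
                    and abs(image[nr][nc] - base) <= tol:
                region.add((nr, nc))
                queue.append((nr, nc))
    return region
```

On a CT slice, dark lung parenchyma grows into one region while brighter tissue and nodule candidates are excluded, which is the property the subsequent chain-code refinement builds on.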

ACS Style

May Phu Paing; Kazuhiko Hamamoto; Supan Tungjitkusolmun; Sarinporn Visitsattapongse; Chuchart Pintavirooj. Automatic Detection of Pulmonary Nodules using Three-dimensional Chain Coding and Optimized Random Forest. Applied Sciences 2020, 10, 2346.

AMA Style

May Phu Paing, Kazuhiko Hamamoto, Supan Tungjitkusolmun, Sarinporn Visitsattapongse, Chuchart Pintavirooj. Automatic Detection of Pulmonary Nodules using Three-dimensional Chain Coding and Optimized Random Forest. Applied Sciences. 2020;10(7):2346.

Chicago/Turabian Style

May Phu Paing; Kazuhiko Hamamoto; Supan Tungjitkusolmun; Sarinporn Visitsattapongse; Chuchart Pintavirooj. 2020. "Automatic Detection of Pulmonary Nodules using Three-dimensional Chain Coding and Optimized Random Forest." Applied Sciences 10, no. 7: 2346.

Journal article
Published: 5 March 2020 in Applied Sciences

Cervical cancer can be prevented by regular screening to find precancerous changes and treat them. The Pap test looks for abnormal or precancerous changes in the cells of the cervix, but manual screening of Pap smears under the microscope is subjective, with poorly reproducible criteria. Therefore, the aim of this study was to develop a computer-assisted screening system for cervical cancer using digital image processing of Pap smear images. There are four basic steps in our cervical cancer screening system. In cell segmentation, nuclei are detected using a shape-based iterative method, and overlapping cytoplasm is separated using a marker-controlled watershed approach. In the feature extraction step, three important features are extracted from the regions of segmented nuclei and cytoplasm. A random forest (RF) algorithm is used as the feature selection method. In the classification stage, a bagging ensemble classifier is applied that combines the results of five classifiers: linear discriminant (LD), support vector machine (SVM), k-nearest neighbor (KNN), boosted trees, and bagged trees. The SIPaKMeD and Herlev datasets were used to prove the effectiveness of the proposed system. According to the experimental results, accuracies of 98.27% in two-class classification and 94.09% in five-class classification were achieved on the SIPaKMeD dataset. Compared with the five individual classifiers, our proposed method was significantly better in both the two-class and five-class problems.
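The ensemble's final decision can be sketched as a per-sample majority vote over the five base classifiers' predictions; this is a generic voting sketch, not necessarily the paper's exact combination scheme.

```python
from collections import Counter

def majority_vote(labels):
    """Most frequent label among the base classifiers' outputs."""
    return Counter(labels).most_common(1)[0][0]

def ensemble_predict(per_model_preds):
    """Majority vote per sample across the base models' prediction lists."""
    return [majority_vote(sample) for sample in zip(*per_model_preds)]
```

So if LD, SVM, and KNN call a cell "abnormal" while the two tree-based models call it "normal", the ensemble reports "abnormal", smoothing out the individual classifiers' disagreements.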

ACS Style

Kyi Pyar Win; Yuttana Kitjaidure; Kazuhiko Hamamoto; Thet Myo Aung. Computer-Assisted Screening for Cervical Cancer Using Digital Image Processing of Pap Smear Images. Applied Sciences 2020, 10, 1800.

AMA Style

Kyi Pyar Win, Yuttana Kitjaidure, Kazuhiko Hamamoto, Thet Myo Aung. Computer-Assisted Screening for Cervical Cancer Using Digital Image Processing of Pap Smear Images. Applied Sciences. 2020;10(5):1800.

Chicago/Turabian Style

Kyi Pyar Win; Yuttana Kitjaidure; Kazuhiko Hamamoto; Thet Myo Aung. 2020. "Computer-Assisted Screening for Cervical Cancer Using Digital Image Processing of Pap Smear Images." Applied Sciences 10, no. 5: 1800.

Journal article
Published: 10 October 2019 in Sustainability

This paper introduces the design and characterization of a double-stage energy harvesting floor tile that uses a piezoelectric cantilever to generate electricity from human footsteps. A frequency up-conversion principle, in the form of an overshooting piezoelectric cantilever plucked by a proof mass, is utilized to increase energy conversion efficiency. The overshoot of the proof mass is implemented by a mechanical impact between a moving cover plate and a stopper, preventing damage to the plucked piezoelectric element. In an experiment, the piezoelectric cantilever of a floor tile prototype was excited by a pneumatic actuator that simulated human footsteps. The key parameters affecting the electrical power and energy outputs were investigated by actuating the prototype with several kinds of excitation input. It was found that, when actuated by a single simulated footstep, the prototype was able to produce electrical power and energy in two stages. The cantilever resonated at a frequency of 14.08 Hz. The output electricity was directly proportional to the acceleration of the moving cover plate and the gap between the cover plate and the stopper. An average power of 0.82 mW and a total energy of 2.40 mJ were obtained at an acceleration of 0.93 g and a gap of 4 mm. The prototype has a simple structure and is able to operate over a wide range of frequencies.
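Two small calculations connect the reported figures: a lumped spring-mass model relates the cantilever's resonance to f = (1/2π)√(k/m), and average power is harvested energy divided by the averaging window. The stiffness and mass below are hypothetical values chosen only because they happen to reproduce roughly the reported 14.08 Hz; they are not taken from the paper.

```python
import math

def natural_frequency(stiffness_n_per_m, mass_kg):
    """First natural frequency (Hz) of a lumped spring-mass oscillator:
    f = (1/2*pi) * sqrt(k/m)."""
    return math.sqrt(stiffness_n_per_m / mass_kg) / (2 * math.pi)

def mean_power_w(energy_j, window_s):
    """Average electrical power over the measurement window."""
    return energy_j / window_s
```

A hypothetical cantilever of about 78.3 N/m with a 10 g proof mass resonates near 14.08 Hz; and if the reported 2.40 mJ and 0.82 mW refer to the same averaging window, that window works out to roughly 2.9 s.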

ACS Style

Don Isarakorn; Subhawat Jayasvasti; Phosy Panthongsy; Pattanaphong Janphuang; Kazuhiko Hamamoto. Design and Evaluation of Double-Stage Energy Harvesting Floor Tile. Sustainability 2019, 11, 5582.

AMA Style

Don Isarakorn, Subhawat Jayasvasti, Phosy Panthongsy, Pattanaphong Janphuang, Kazuhiko Hamamoto. Design and Evaluation of Double-Stage Energy Harvesting Floor Tile. Sustainability. 2019;11(20):5582.

Chicago/Turabian Style

Don Isarakorn; Subhawat Jayasvasti; Phosy Panthongsy; Pattanaphong Janphuang; Kazuhiko Hamamoto. 2019. "Design and Evaluation of Double-Stage Energy Harvesting Floor Tile." Sustainability 11, no. 20: 5582.

Journal article
Published: 6 June 2019 in Applied Sciences

Lung cancer is a life-threatening disease with the highest morbidity and mortality rates of any cancer worldwide. Clinical staging of lung cancer can significantly reduce the mortality rate, because effective treatment options strongly depend on the specific stage of cancer. Unfortunately, manual staging remains a challenge due to the intensive effort required. This paper presents a computer-aided diagnosis (CAD) method for detecting and staging lung cancer from computed tomography (CT) images. This CAD works in three fundamental phases: segmentation, detection, and staging. In the first phase, lung anatomical structures from the input tomography scans are segmented using gray-level thresholding. In the second, the tumor nodules inside the lungs are detected using features extracted from the segmented tumor candidates. In the last phase, the clinical stages of the detected tumors are defined by extracting locational features. For accurate and robust predictions, our CAD applies a double-staged classification: the first stage is for the detection of tumors and the second is for staging. In both classification stages, five alternative classifiers, namely the Decision Tree (DT), K-nearest neighbor (KNN), Support Vector Machine (SVM), Ensemble Tree (ET), and Back Propagation Neural Network (BPNN), are applied and compared to ensure high classification performance. Average accuracies of 92.8% for detection and 90.6% for staging are achieved using the BPNN. Experimental findings reveal that the proposed CAD method provides preferable results compared to previous methods; thus, it is applicable as a clinical diagnostic tool for lung cancer.
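The double-staged classification can be sketched as a simple pipeline in which the staging classifier only ever sees candidates the detector accepted. The detector and staging rules below are hypothetical stand-ins for the trained classifiers, and the size thresholds are invented for illustration.

```python
def two_stage_classify(candidates, detect, stage):
    """Stage 1 labels each candidate tumor vs. non-tumor; stage 2
    assigns a clinical stage only to the detected tumors."""
    results = []
    for c in candidates:
        if detect(c):
            results.append(("tumor", stage(c)))
        else:
            results.append(("non-tumor", None))
    return results
```

In the paper both `detect` and `stage` would be the best-performing trained classifier (the BPNN); the pipeline shape is what makes the predictions robust, since staging errors cannot occur on candidates the detector already rejected.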

ACS Style

May Phu Paing; Kazuhiko Hamamoto; Supan Tungjitkusolmun; Chuchart Pintavirooj. Automatic Detection and Staging of Lung Tumors using Locational Features and Double-Staged Classifications. Applied Sciences 2019, 9, 2329.

AMA Style

May Phu Paing, Kazuhiko Hamamoto, Supan Tungjitkusolmun, Chuchart Pintavirooj. Automatic Detection and Staging of Lung Tumors using Locational Features and Double-Staged Classifications. Applied Sciences. 2019;9(11):2329.

Chicago/Turabian Style

May Phu Paing; Kazuhiko Hamamoto; Supan Tungjitkusolmun; Chuchart Pintavirooj. 2019. "Automatic Detection and Staging of Lung Tumors using Locational Features and Double-Staged Classifications." Applied Sciences 9, no. 11: 2329.

Journal article
Published: 11 September 2018 in Applied Sciences

Due to the close resemblance between overlapping and cancerous nuclei, the misinterpretation of overlapping nuclei can affect the final decision of cancer cell detection. Thus, it is essential to detect overlapping nuclei and distinguish them from single ones for subsequent quantitative analyses. This paper presents a method for the automated detection and classification of overlapping nuclei from single nuclei appearing in cytology pleural effusion (CPE) images. The proposed system comprises three steps: nuclei candidate extraction, dominant feature extraction, and classification of single and overlapping nuclei. A maximum entropy thresholding method complemented by image enhancement and post-processing was employed for nuclei candidate extraction. For feature extraction, a new combination of 16 geometrical and 10 textural features was extracted from each nucleus region. A double-strategy random forest was applied, first as an ensemble feature selector to select the most relevant features, and then as an ensemble classifier to differentiate overlapping nuclei from single ones using the selected features. The proposed method was evaluated on 4000 nuclei from CPE images using various performance metrics. The results were 96.6% sensitivity, 98.7% specificity, 92.7% precision, 94.6% F1 score, 98.4% accuracy, 97.6% G-mean, and 99% area under the curve. The computation time required to run the entire algorithm was just 5.17 s. The experimental results demonstrate that the proposed algorithm yields a superior performance to previous studies and other classifiers. The proposed algorithm can serve as a new supportive tool in the automated diagnosis of cancer cells from cytology images.
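The feature-selection half of the double-strategy random forest can be sketched as ranking the 26 features by the forest's importance scores and keeping the top fraction; the feature names and importance values in the usage note are made up for illustration.

```python
def select_features(importances, keep_ratio=0.5):
    """Keep the highest-importance features: rank by score and retain a
    fixed fraction of the total (always at least one feature)."""
    ranked = sorted(importances, key=importances.get, reverse=True)
    k = max(1, round(len(ranked) * keep_ratio))
    return ranked[:k]
```

The second random forest, the ensemble classifier, would then be retrained on only the retained columns, which is what lets the method stay fast (5.17 s for the entire algorithm) while keeping the most discriminative geometrical and textural features.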

ACS Style

Khin Yadanar Win; Somsak Choomchuay; Kazuhiko Hamamoto; Manasanan Raveesunthornkiat. Detection and Classification of Overlapping Cell Nuclei in Cytology Effusion Images Using a Double-Strategy Random Forest. Applied Sciences 2018, 8, 1608.

AMA Style

Khin Yadanar Win, Somsak Choomchuay, Kazuhiko Hamamoto, Manasanan Raveesunthornkiat. Detection and Classification of Overlapping Cell Nuclei in Cytology Effusion Images Using a Double-Strategy Random Forest. Applied Sciences. 2018;8(9):1608.

Chicago/Turabian Style

Khin Yadanar Win; Somsak Choomchuay; Kazuhiko Hamamoto; Manasanan Raveesunthornkiat. 2018. "Detection and Classification of Overlapping Cell Nuclei in Cytology Effusion Images Using a Double-Strategy Random Forest." Applied Sciences 8, no. 9: 1608.

Journal article
Published: 22 July 2018 in Applied Sciences

Diabetic Retinopathy (DR) is the leading cause of blindness in working-age adults globally. Primary screening of DR is essential, and it is recommended that diabetes patients undergo this procedure at least once per year to prevent vision loss. However, in addition to the insufficient number of ophthalmologists available, the eye examination itself is labor-intensive and time-consuming. Thus, an automated DR screening method using retinal images is proposed in this paper to reduce the workload of ophthalmologists in the primary screening process and to help them make effective treatment plans promptly, preventing patient blindness. First, all possible candidate lesions of DR were segmented from the whole retinal image using a combination of morphological top-hat and Kirsch edge-detection methods, supplemented by pre- and post-processing steps. Then, eight feature extractors were utilized to extract a total of 208 features based on the pixel density of the binary image as well as texture, color, and intensity information for the detected regions. Finally, hybrid simulated annealing was applied to select the optimal feature set to be used as the input to the ensemble bagging classifier. The evaluation results of the proposed method, on a dataset containing 1200 retinal images, indicate that it performs better than previous methods, with an accuracy of 97.08%, a sensitivity of 90.90%, a specificity of 98.92%, a precision of 96.15%, an F-measure of 93.45%, and an area under the receiver operating characteristic curve of 98.34%.
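Simulated annealing over binary feature masks can be sketched as below: flip one bit at a time, always accept improvements, and accept worse masks with a probability that shrinks as the temperature cools. This is plain simulated annealing under illustrative parameters; the paper's hybrid variant and its actual objective (classifier performance on the 208 features) are more elaborate.

```python
import math, random

def anneal_select(n_feats, score, iters=200, t0=1.0, cooling=0.97, seed=0):
    """Simulated annealing over binary feature masks: flip one bit per
    step, keep worse masks with probability exp(delta / T)."""
    rng = random.Random(seed)
    mask = [rng.randint(0, 1) for _ in range(n_feats)]
    cur_score = score(mask)
    best, best_score, temp = mask[:], cur_score, t0
    for _ in range(iters):
        cand = mask[:]
        cand[rng.randrange(n_feats)] ^= 1       # flip one feature bit
        delta = score(cand) - cur_score
        if delta >= 0 or rng.random() < math.exp(delta / temp):
            mask, cur_score = cand, cur_score + delta
            if cur_score > best_score:
                best, best_score = mask[:], cur_score
        temp *= cooling                          # cool the temperature
    return best
```

With a toy score that rewards two informative features and penalizes mask size, the annealer settles on a compact informative subset; the selected mask would then feed the ensemble bagging classifier.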

ACS Style

Syna Sreng; Noppadol Maneerat; Kazuhiko Hamamoto; Ronakorn Panjaphongse. Automated Diabetic Retinopathy Screening System Using Hybrid Simulated Annealing and Ensemble Bagging Classifier. Applied Sciences 2018, 8, 1198.

AMA Style

Syna Sreng, Noppadol Maneerat, Kazuhiko Hamamoto, Ronakorn Panjaphongse. Automated Diabetic Retinopathy Screening System Using Hybrid Simulated Annealing and Ensemble Bagging Classifier. Applied Sciences. 2018;8(7):1198.

Chicago/Turabian Style

Syna Sreng; Noppadol Maneerat; Kazuhiko Hamamoto; Ronakorn Panjaphongse. 2018. "Automated Diabetic Retinopathy Screening System Using Hybrid Simulated Annealing and Ensemble Bagging Classifier." Applied Sciences 8, no. 7: 1198.