Rajan Gyawali
Computer Science Doctoral Student
Bioinformatics and Machine Learning Lab
University of Missouri - Columbia
Address: Columbia, Missouri
Date of Birth: May 24, 1993
Email: rajangyawali.np@gmail.com, rgkg2@umsystem.edu
Phone: +1 573-639-7487
RESEARCH INTERESTS
Machine Learning, Deep Learning
Computer Vision, Bioinformatics
TOP SKILLS
Python, Django, REST API
Data Analytics and Visualization
Matplotlib, Plotly, Dash
NumPy, Pandas
PostgreSQL
JavaScript
Git
My research interests include Machine Learning, Deep Learning, Computer Vision and Bioinformatics.
EDUCATION
Pulchowk Campus, Tribhuvan University, Lalitpur, Nepal | 2017 - 2019
MSc. Information and Communication Engineering
Worked as a research student under the supervision of Dr. Dibakar Raj Pant, Pulchowk Campus.
- Percentage: 90.13%
- Major Subjects: Color Image Technology, Image Processing, Machine Learning & Pattern Recognition
- Project: Convolutional Neural Networks for Multiclass Face Recognition
- Thesis: Employee Face Recognition by Region Proposal Networks and Faster R-CNN
Project Abstract
A face recognition system based on a Convolutional Neural Network (CNN) is developed. The network contains five convolutional layers, each followed by a Rectified Linear Unit (ReLU) and a max pooling layer, plus two fully connected layers and a Softmax classifier. The convolutional layers extract successively larger features in a hierarchical set of layers. A database of 13,000 images of 5,749 individuals is used, which contains a high degree of variability in expression, pose and facial detail. The system classifies the images with an accuracy of 96.44%.
Keywords: convolutional neural network, face recognition, classification
Read Full Project Report
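As a rough illustration of the architecture described in the project abstract above, the sketch below builds a five-block convolutional classifier in PyTorch with ReLU activations, max pooling after each block, two fully connected layers and a Softmax output. The channel widths, the 128x128 input resolution and the class count are illustrative assumptions, not the project's exact configuration.

```python
# Minimal sketch of a five-block CNN face classifier (PyTorch).
# Channel widths, input resolution and class count are illustrative assumptions.
import torch
import torch.nn as nn

class FaceCNN(nn.Module):
    def __init__(self, num_classes: int = 5749):
        super().__init__()
        widths = [3, 32, 64, 128, 256, 256]          # five convolutional blocks
        blocks = []
        for c_in, c_out in zip(widths[:-1], widths[1:]):
            blocks += [
                nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),                # ReLU after each convolution
                nn.MaxPool2d(kernel_size=2),          # max pooling after each block
            ]
        self.features = nn.Sequential(*blocks)
        self.classifier = nn.Sequential(              # two fully connected layers
            nn.Flatten(),
            nn.Linear(256 * 4 * 4, 512),              # assumes 128x128 input -> 4x4 feature maps
            nn.ReLU(inplace=True),
            nn.Linear(512, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits = self.classifier(self.features(x))
        return torch.softmax(logits, dim=1)           # Softmax class probabilities

if __name__ == "__main__":
    probs = FaceCNN()(torch.randn(1, 3, 128, 128))
    print(probs.shape)                                # torch.Size([1, 5749])
```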
Thesis Abstract
Face recognition is becoming popular in companies, supermarkets, hospitals, etc. for security systems, human-machine interaction and video surveillance. Employee face recognition is required to differentiate between employees and non-employees, and it remains a challenging task. Traditional machine learning approaches such as Principal Component Analysis and Support Vector Machines rely on hand-crafted image features such as edges and texture descriptors. More recently, Convolutional Neural Networks (CNNs) and other deep learning algorithms have shown stronger performance in face recognition. This thesis uses a Region Proposal Network (RPN) to localize regions of interest (faces) in the image and Faster R-CNN to output each region proposal's label and associated bounding box. The proposed system consists of three sections. The first section uses a CNN for feature extraction. From these features, the second section generates region proposals using the RPN. The third section classifies these region proposals using Faster R-CNN, and the employee face is recognized. The recognized face has a size of 128x128 pixels. The model achieves an accuracy of 96.0% in recognizing employees from the ChokePoint dataset. It is further tested on a recorded employee dataset from Nepal Telecom and achieves an accuracy of 95.2%. The performance of the proposed method is evaluated on these datasets using confusion matrices; visual evaluation using receiver operating characteristic (ROC) curves shows a clear distinction between employees and non-employees.
Keywords: Employee Face Recognition, Region Proposal Networks, Convolutional Neural Network, Faster R-CNN
Read Full Thesis Report
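The three-stage pipeline described in the thesis abstract (CNN feature extraction, RPN proposals, Faster R-CNN classification) maps closely onto torchvision's bundled Faster R-CNN. The sketch below shows such an off-the-shelf setup under assumed settings; it is not the thesis implementation, and the number of employee classes and the confidence threshold are placeholders.

```python
# Sketch of an RPN + Faster R-CNN face detector using torchvision (not the thesis code).
# The number of employee classes and the score threshold are illustrative assumptions.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_detector(num_employees: int = 10):
    # Backbone CNN, RPN and detection head come bundled in torchvision's Faster R-CNN.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    # Replace the classification head: one class per employee plus background.
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_employees + 1)
    return model

if __name__ == "__main__":
    model = build_detector().eval()
    image = torch.rand(3, 480, 640)                 # stand-in for a surveillance frame
    with torch.no_grad():
        output = model([image])[0]                  # dict with boxes, labels, scores
    keep = output["scores"] > 0.5                   # assumed confidence threshold
    print(output["boxes"][keep], output["labels"][keep])
```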
Himalaya College of Engineering, Tribhuvan University, Lalitpur, Nepal | 2012 - 2016
BE Electronics and Communication Engineering
- Percentage: 82.06%
- Major Subjects: Discrete Mathematics, Artificial Intelligence, Data Mining, Big Data Technologies, Analog and Digital Signal Processing, Computer Networks, Communication Systems
- Minor Project: Smart Irrigation System
- Final Year Project: Brain Controlled Wheelchair
Project Abstract
Brain Controlled Wheelchair, an application of the Brain-Computer Interface (BCI), uses the NeuroSky MindWave headset for signal acquisition from the brain. A BCI allows direct communication between the brain and a computer, bypassing the body's normal neuromuscular pathways. The wheelchair is aimed at physically impaired people. The Brain Controlled Wheelchair directly measures brain activity associated with the user's intent and translates the recorded activity into corresponding control signals for the wheelchair. The signals recorded by the system are processed and classified to recognize the intent of the user.
Independent mobility is core to being able to perform the activities of daily living by oneself. Millions of people around the world suffer from mobility impairments, and hundreds of thousands of them rely on powered wheelchairs for their activities of daily living. However, many patients are unable to control a powered wheelchair through a conventional interface. Hence, a non-invasive BCI offers a promising solution to this interaction problem.
Keywords: Brain-Computer Interface, non-invasive, disabled, wheelchair
Read Full Project Report
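As a simplified illustration of the acquire-process-classify loop described above, the sketch below maps attention and blink readings to wheelchair commands with fixed thresholds. The read_sample generator and the threshold values are hypothetical stand-ins, not the project's actual NeuroSky interface or classifier.

```python
# Hypothetical sketch of mapping headset readings to wheelchair commands.
# read_sample() and the thresholds are illustrative stand-ins, not the project's code.
import random
from typing import Iterator, Tuple

def read_sample() -> Iterator[Tuple[int, int]]:
    """Yield (attention, blink_strength) pairs; simulated here instead of a real headset."""
    while True:
        yield random.randint(0, 100), random.randint(0, 255)

def to_command(attention: int, blink: int,
               attn_threshold: int = 60, blink_threshold: int = 150) -> str:
    # High attention -> move forward; a strong blink -> stop; otherwise hold position.
    if blink >= blink_threshold:
        return "STOP"
    if attention >= attn_threshold:
        return "FORWARD"
    return "IDLE"

if __name__ == "__main__":
    stream = read_sample()
    for _ in range(5):
        attention, blink = next(stream)
        print(attention, blink, "->", to_command(attention, blink))
```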
PUBLICATIONS
Abstract
Picking protein particles in cryo-electron microscopy (cryo-EM) micrographs is a crucial step in cryo-EM-based structure determination. However, existing methods trained on a limited amount of cryo-EM data still cannot accurately pick protein particles from noisy cryo-EM images. General foundational artificial intelligence-based image segmentation models such as Meta’s Segment Anything Model (SAM) cannot segment protein particles well because their training data do not include cryo-EM images. Here, we present a novel approach (CryoSegNet) that integrates an attention-gated U-shape network (U-Net), specially designed and trained for cryo-EM particle picking, with SAM. The U-Net is first trained on a large cryo-EM image dataset and then used to generate input from original cryo-EM images for SAM to pick particles. CryoSegNet shows both high precision and recall in segmenting protein particles from cryo-EM micrographs, irrespective of protein type, shape and size. On several independent datasets of various protein types, CryoSegNet outperforms two top machine learning particle pickers, crYOLO and Topaz, as well as SAM itself. The average resolution of density maps reconstructed from the particles picked by CryoSegNet is 3.33 Å, 7% better than 3.58 Å of Topaz and 14% better than 3.87 Å of crYOLO. It is publicly available at https://github.com/jianlin-cheng/CryoSegNet
Read Full Article
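A rough sketch of the two-stage idea, an enhanced micrograph handed to SAM's automatic mask generator, is given below using the public segment-anything package. The checkpoint path is an assumption, and enhance_micrograph is a simple contrast-normalization stand-in for the trained attention-gated U-Net, which is not reproduced here.

```python
# Sketch of the two-stage idea: an enhanced micrograph is fed to Meta's Segment
# Anything Model (SAM) to propose particle masks. The checkpoint path is an
# assumption, and enhance_micrograph() is a stand-in for the trained attention-gated
# U-Net, not the actual CryoSegNet network.
import numpy as np
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

def enhance_micrograph(micrograph: np.ndarray) -> np.ndarray:
    """Percentile-clip and rescale a 2D micrograph to an 8-bit RGB image for SAM."""
    lo, hi = np.percentile(micrograph, (1, 99))
    scaled = np.clip((micrograph - lo) / (hi - lo + 1e-8), 0, 1)
    gray = (scaled * 255).astype(np.uint8)
    return np.stack([gray, gray, gray], axis=-1)      # SAM expects HxWx3 uint8

def pick_particles(micrograph: np.ndarray, checkpoint: str = "sam_vit_b.pth"):
    sam = sam_model_registry["vit_b"](checkpoint=checkpoint)
    generator = SamAutomaticMaskGenerator(sam)
    masks = generator.generate(enhance_micrograph(micrograph))
    # Use each mask's bounding box (x, y, w, h) to report a particle center.
    return [(m["bbox"][0] + m["bbox"][2] / 2,
             m["bbox"][1] + m["bbox"][3] / 2) for m in masks]

if __name__ == "__main__":
    fake_micrograph = np.random.rand(512, 512).astype(np.float32)
    print(pick_particles(fake_micrograph)[:5])
```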
Abstract
Motivation
Cryo-electron microscopy (cryo-EM) is a powerful technique for determining the structures of large protein complexes. Picking single protein particles from cryo-EM micrographs (images) is a crucial step in reconstructing protein structures from them. However, the widely used template-based particle picking process requires some manual particle picking and is labor-intensive and time-consuming. Though machine learning and artificial intelligence (AI) can potentially automate particle picking, the current AI methods pick particles with low precision or low recall. The erroneously picked particles can severely reduce the quality of reconstructed protein structures, especially for the micrographs with low signal-to-noise ratio.
Results
To address these shortcomings, we devised CryoTransformer based on transformers, residual networks, and image processing techniques to accurately pick protein particles from cryo-EM micrographs. CryoTransformer was trained and tested on the largest labeled cryo-EM protein particle dataset, CryoPPP. It outperforms the current state-of-the-art machine learning methods for particle picking in terms of the resolution of 3D density maps reconstructed from the picked particles as well as F1-score, and is poised to facilitate the automation of cryo-EM protein particle picking.
Availability and implementation
The source code and data for CryoTransformer are openly available at: https://github.com/jianlin-cheng/CryoTransformer.
Read Full Article
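The combination of a residual backbone with a transformer described in the abstract can be sketched in a generic DETR-like form, as below. The dimensions, query count and attention heads are illustrative assumptions, not the published CryoTransformer configuration.

```python
# Generic DETR-style sketch of a ResNet backbone + transformer predicting particle
# boxes; dimensions, query count and heads are illustrative assumptions, not the
# published CryoTransformer architecture.
import torch
import torch.nn as nn
import torchvision

class ParticleDetector(nn.Module):
    def __init__(self, d_model: int = 256, num_queries: int = 100):
        super().__init__()
        backbone = torchvision.models.resnet50(weights=None)
        self.backbone = nn.Sequential(*list(backbone.children())[:-2])  # keep conv features
        self.proj = nn.Conv2d(2048, d_model, kernel_size=1)             # project to model width
        self.transformer = nn.Transformer(d_model, nhead=8,
                                          num_encoder_layers=3, num_decoder_layers=3,
                                          batch_first=True)
        self.queries = nn.Parameter(torch.randn(num_queries, d_model))  # learned object queries
        self.box_head = nn.Linear(d_model, 4)       # (cx, cy, w, h), normalized to [0, 1]
        self.score_head = nn.Linear(d_model, 1)     # particle vs. background logit

    def forward(self, images: torch.Tensor):
        feats = self.proj(self.backbone(images))                 # B x d x H' x W'
        tokens = feats.flatten(2).transpose(1, 2)                # B x (H'*W') x d
        queries = self.queries.unsqueeze(0).expand(images.size(0), -1, -1)
        decoded = self.transformer(tokens, queries)              # B x num_queries x d
        return self.box_head(decoded).sigmoid(), self.score_head(decoded)

if __name__ == "__main__":
    boxes, scores = ParticleDetector()(torch.randn(1, 3, 256, 256))
    print(boxes.shape, scores.shape)    # (1, 100, 4), (1, 100, 1)
```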
Abstract
Cryo-electron microscopy (cryo-EM) is a powerful technique for determining the structures of biological macromolecular complexes. Picking single-protein particles from cryo-EM micrographs is a crucial step in reconstructing protein structures. However, the widely used template-based particle picking process is labor-intensive and time-consuming. Though machine learning and artificial intelligence (AI) based particle picking can potentially automate the process, its development is hindered by a lack of large, high-quality labelled training data. To address this bottleneck, we present CryoPPP, a large, diverse, expert-curated cryo-EM image dataset for protein particle picking and analysis. It consists of labelled cryo-EM micrographs (images) of 34 representative protein datasets selected from the Electron Microscopy Public Image Archive (EMPIAR). The dataset is 2.6 terabytes and includes 9,893 high-resolution micrographs with labelled protein particle coordinates. The labelling process was rigorously validated through 2D particle class validation and 3D density map validation with the gold standard. The dataset is expected to greatly facilitate the development of both AI and classical methods for automated cryo-EM protein particle picking.
Read Full Article
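A minimal way to inspect this kind of labelled data is sketched below, reading a micrograph with the mrcfile package and particle coordinates with pandas. The file names and the coordinate column names are assumptions about the layout, not a documented CryoPPP loader.

```python
# Hypothetical sketch for inspecting a labelled micrograph: the file names and the
# coordinate column names are assumptions about the layout, not a documented
# CryoPPP loader.
import mrcfile
import pandas as pd
import matplotlib.pyplot as plt

def show_labelled_micrograph(mrc_path: str, coords_csv: str) -> None:
    with mrcfile.open(mrc_path, permissive=True) as mrc:
        micrograph = mrc.data                       # 2D float array
    coords = pd.read_csv(coords_csv)                # assumed columns: X-Coordinate, Y-Coordinate
    plt.imshow(micrograph, cmap="gray")
    plt.scatter(coords["X-Coordinate"], coords["Y-Coordinate"],
                s=12, facecolors="none", edgecolors="red")
    plt.title("Labelled protein particles")
    plt.show()

if __name__ == "__main__":
    # Hypothetical file names for one EMPIAR-derived entry.
    show_labelled_micrograph("micrograph_0001.mrc", "micrograph_0001_particles.csv")
```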
Abstract
Face recognition is becoming popular in companies, supermarkets, hospitals, etc. for security systems, human-machine interaction and video surveillance. Employee face recognition is required to differentiate between employees and non-employees, and it remains a challenging task. Traditional machine learning algorithms such as Principal Component Analysis and Support Vector Machines rely on hand-crafted image features such as edges and texture descriptors. More recently, Convolutional Neural Networks (CNNs) and other deep learning algorithms have shown stronger performance in face recognition. In this article, a Region Proposal Network (RPN) is used to localize regions of interest (faces) in the image and Faster R-CNN to output each region proposal's label along with its associated bounding box. The proposed system consists of three sections. The first section uses a CNN for feature extraction. From these features, the second section generates region proposals using the RPN. The third section classifies these region proposals using Faster R-CNN, and the employee face is recognized. The model achieves an accuracy of 96.0% on the ChokePoint employee dataset. It is further tested on the Nepal Telecom employee dataset and shows an accuracy of 95.2%. The performance of the proposed method is evaluated on these datasets using confusion matrices; visual evaluation using receiver operating characteristic (ROC) curves shows a clear distinction between employees and non-employees.
Read Full Article
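The evaluation protocol described above (a confusion matrix plus ROC analysis for employee vs. non-employee decisions) can be reproduced with scikit-learn as sketched below; the labels and scores are simulated stand-ins for the detector's outputs.

```python
# Sketch of the employee vs. non-employee evaluation described above, using
# scikit-learn; the labels and scores are simulated stand-ins for real detector output.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_curve, auc

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)                               # 1 = employee, 0 = non-employee
scores = np.clip(y_true * 0.6 + rng.normal(0.2, 0.25, 500), 0, 1)   # simulated detector confidence
y_pred = (scores >= 0.5).astype(int)                                # assumed decision threshold

print("Confusion matrix:\n", confusion_matrix(y_true, y_pred))
fpr, tpr, _ = roc_curve(y_true, scores)
print("ROC AUC:", auc(fpr, tpr))
```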