A faculty candidate for the Data Visualization, Auralization, Haptics position is an Assistant Professor of Data Science in the Department of Mathematics and Statistics at the University of Houston-Downtown, where he also chairs the Data Science program committee. The candidate received his PhD in Computational Sciences and Informatics from George Mason University in 2020. His research lies at the intersection of Health Data Science, Computational Biophysics, and Interpretable Machine Learning, with enquiry across disciplines. Application domains are diverse, spanning health science, bioinformatics, computational biology, engineering, epidemiology, materials science, and more. In particular, his laboratory focuses on using molecular dynamics simulations to investigate, analyze, and characterize viral structural proteins, and on applying interpretable machine learning techniques to solve problems. The candidate is also interested in complex data visualization through the building of interactive dashboards and maps.

Research Talk:  "Collaborative Approach between Molecular Dynamics and Interpretable Machine Learning to Explore Viral Structural Proteins."

This talk spans four projects from two broad but convergent research areas.

The first part presents tertiary structure modeling of the Rift Valley fever virus (RVFV) L protein. Tertiary structures govern the biological activities of proteins in living cells and are consequently a focus of numerous studies aiming to understand cellular processes crucial to human health. Due to its large size and multiple domains, elucidation of the L protein's tertiary structure has so far challenged both dry and wet labs. This study elucidates the structure using a combination of in silico techniques, leveraging complementary perspectives and tools from the computational-molecular-biology and bioinformatics domains to construct, refine, and evaluate atomistic, physically realistic structural models. Structural characterization of the protein helps to better understand viral replication and transcription via further protein-mediated protein–protein interaction studies.

The second part further explores solvent effects on the C-terminal domain of the L protein. Many biological processes are controlled by protein–solvent and protein–protein interactions; to predict and optimize such processes, it is important to understand how solvents affect protein structure. Molecular dynamics (MD) simulation is used to investigate the structural dynamics and energetic properties of the domain in aqueous glycerol solutions of different concentrations. The study identified a region within a hydrophobic pocket of aromatic amino acid side chains that had been discovered experimentally as the binding site for the m7GTP molecule. This cap-binding region will be important in developing broad-spectrum inhibitors against this and other viruses from the order Bunyavirales.
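At its core, an MD simulation of the kind described above repeatedly integrates Newton's equations of motion for interacting atoms. The minimal sketch below is purely illustrative and is not the candidate's actual simulation setup: it integrates a single unit-mass particle in a fixed Lennard-Jones well with the velocity Verlet scheme (reduced units), the standard integrator in MD codes.

```python
def lj_force(r):
    """Force from the Lennard-Jones pair potential, F = -dV/dr (reduced units)."""
    return 24.0 * (2.0 / r**13 - 1.0 / r**7)

def lj_energy(r):
    """Lennard-Jones potential energy V(r) = 4(r^-12 - r^-6)."""
    return 4.0 * (1.0 / r**12 - 1.0 / r**6)

def velocity_verlet(r=1.5, v=0.0, dt=0.001, steps=5000):
    """Integrate one particle of unit mass bound in the LJ well.

    Returns the final position, final velocity, and the trajectory of
    positions, which oscillate around the LJ minimum at r = 2**(1/6).
    """
    f = lj_force(r)
    traj = []
    for _ in range(steps):
        r += v * dt + 0.5 * f * dt * dt   # position update
        f_new = lj_force(r)
        v += 0.5 * (f + f_new) * dt       # velocity update with averaged force
        f = f_new
        traj.append(r)
    return r, v, traj
```

Because velocity Verlet is symplectic, the total energy `lj_energy(r) + 0.5 * v**2` stays nearly constant over the run, which is the basic sanity check used in production MD as well.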

The effect of temperature on the SARS-CoV-2 nucleocapsid domain is presented in the third part of the talk. The coronavirus machinery is complex and may have other stabilizing mechanisms and protein–solvent interactions to cope with the environmental stress brought on by seasonal changes. This study aimed at understanding and explaining the sensitivity of the SARS-CoV-2 nucleocapsid domain to different temperatures. In general, it was found that protein stability increases with increasing temperature. Structural characterization of the atomic trajectories, however, did not offer clear insight into the structural dynamics. The results suggest that the SARS-CoV-2 protein is well adapted to habitable temperatures and exhibits extreme stability.

On-going: Interpretable Machine Learning (ML), an emerging area of research, aims at unboxing ML algorithms. It inspects the measures that models use in decision-making and seeks to explain them explicitly. Most algorithms with high predictive power, such as deep neural networks, cannot demonstrate why and how decisions are made; as a result, the lack of explainability can set back human confidence in them. ML algorithms are becoming crucial in healthcare and medical research. However, insufficient model interpretability, explainability of results, and transparency in most existing algorithms is a major reason why successful implementations of ML systems in routine clinical practice are uncommon. This fourth project uses interpretable ML to classify Alzheimer's disease data, compare model performance, and provide interpretations using post hoc, model-agnostic methods that are local in scope, such as Local Interpretable Model-agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), and Anchors, as well as global methods such as Partial Dependence Plots (PDPs), Accumulated Local Effects (ALE), feature interaction (H-statistic), and Individual Conditional Expectation (ICE) plots.
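The attribution idea behind SHAP can be made concrete with a small sketch. The toy model, instance, and baseline below are hypothetical, not taken from the Alzheimer's study: for a model with only a few features, the Shapley value of each feature can be computed exactly by enumerating all coalitions, with features outside a coalition held at a baseline value.

```python
from itertools import combinations
from math import factorial

# Hypothetical toy model standing in for a trained classifier's score:
# a linear model with one interaction term between features 0 and 2.
def model(x):
    return 3.0 * x[0] + 2.0 * x[1] + x[0] * x[2]

def shapley_values(model, x, baseline):
    """Exact Shapley values by enumerating all feature coalitions.

    v(S) evaluates the model with features in S set to the instance x
    and all other features held at the baseline.
    """
    n = len(x)

    def v(S):
        z = [x[i] if i in S else baseline[i] for i in range(n)]
        return model(z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                # Shapley coalition weight |S|! (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (v(set(S) | {i}) - v(set(S)))
        phi.append(total)
    return phi

x = [1.0, 2.0, 3.0]
base = [0.0, 0.0, 0.0]
phi = shapley_values(model, x, base)
```

The efficiency property holds by construction: the attributions sum to `model(x) - model(base)`, and the interaction term's contribution is split evenly between features 0 and 2. Practical SHAP implementations approximate this enumeration, which is exponential in the number of features.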

 


Lehigh University Events Calendar