I lead the Practical and Actionable Software Engineering Research (PASER) group at Tennessee Tech University. The PASER group is currently focusing on the following research projects:

eSLIC: Enhanced Security Static Analysis for Configuration Scripts


Abstract: The project will focus on the development of techniques and tools that automatically detect security weaknesses in configuration scripts written in a wide range of languages heavily used in industry. Three main tasks will be investigated. First, qualitative analysis is applied to determine a comprehensive list of security weaknesses across multiple configuration script languages and to devise static analysis techniques that automatically identify each category of security weakness. Next, grammar-based parsing and machine learning techniques are applied, evaluated, and integrated into the derived static analysis to reduce false positives. Finally, the development context of practitioners from the open source and proprietary domains will be systematically mined to generate actionable alerts and suggestions that enable practitioners to fix security weaknesses. Alongside the three technical tasks, industry panels will be organized, in which practitioners from industry will give feedback on the developed techniques and tools. Findings from the project will be disseminated to government, industry, and open source practitioners, as well as to students learning about configuration management in graduate- and undergraduate-level courses related to cybersecurity.
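To illustrate the kind of rule-based check such a static analysis can perform, here is a minimal sketch that flags hard-coded secrets in a configuration script. The rule, identifier names, and regular expressions are hypothetical examples for illustration, not eSLIC's actual implementation.

```python
import re

# Hypothetical rule: flag string literals assigned to secret-like
# identifiers (password, secret, token, key) in a configuration script.
SECRET_NAME = re.compile(r"\b(password|passwd|secret|token|key)\b", re.IGNORECASE)
ASSIGNMENT = re.compile(r"^\s*(?P<name>[\w.]+)\s*(=|=>|:)\s*(?P<value>[\"'].+[\"'])")

def find_hardcoded_secrets(script_text):
    """Return (line_number, line) pairs that look like hard-coded secrets."""
    findings = []
    for lineno, line in enumerate(script_text.splitlines(), start=1):
        match = ASSIGNMENT.match(line)
        if match and SECRET_NAME.search(match.group("name")):
            findings.append((lineno, line.strip()))
    return findings

example = """\
user     = 'deploy'
password = 'sup3rsecret'
timeout  = '30'
"""
print(find_hardcoded_secrets(example))
```

A real analysis works on a parse tree of the script rather than raw lines, which is where the grammar-based parsing and machine learning steps described above come in to cut down false positives.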
Datasets and Software: As part of this project, three tools with accompanying datasets have been developed and made open source:

Funding Source: NSF Award # 2026869

Secure Development of Machine Learning


Abstract: The ubiquitous use of machine learning makes it necessary to secure the development process. In this project we are exploring whether strategies used to secure the development process in traditional software engineering can also be integrated into machine learning development. In particular, we are investigating how to perform logging to diagnose adversarial machine learning attacks and how to formally verify security specifications in machine learning implementations.
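As a sketch of what logging for adversarial-attack diagnosis might look like, the snippet below records each prediction with enough context (input digest, model version, confidence) to audit suspicious queries after the fact. The record fields and the confidence threshold are illustrative assumptions, not the project's actual tooling.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ml-audit")

def log_prediction(model_version, features, label, confidence):
    """Log one prediction as a structured JSON record for later forensics."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash the raw input so the log stays compact, while repeated
        # queries (a common probing pattern) can still be correlated.
        "input_digest": hashlib.sha256(repr(features).encode()).hexdigest(),
        "label": label,
        "confidence": confidence,
        # Hypothetical coarse signal: low-confidence predictions sit near a
        # decision boundary, where evasion attacks often operate.
        "flagged": confidence < 0.6,
    }
    log.info(json.dumps(record))
    return record

entry = log_prediction("v1.2", [0.1, 0.9, 0.3], "spam", 0.55)
```

The design choice here is to log derived, compact evidence (digests, confidence) rather than raw inputs, so that audit logs do not themselves become a data-leakage risk.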
Datasets and Software:

To be available soon.


Building the Foundations of Validation for the Julia Ecosystem


Abstract: Software validation activities, such as software testing and static analysis, are resource-consuming. Insights into what categories of defects appear in software can help practitioners understand the nature of software defects and prioritize validation activities accordingly. Defect categorization can reveal the nature of defects that appear in Julia programs and inform how validation tools can be constructed. Julia is an emerging programming language designed to offer syntax similar to that of scripting languages, such as Python, with execution speed similar to that of compiled languages with low-level memory access, such as C. According to a 2020 survey of Stack Overflow users, Julia is considered one of the top 10 most loved programming languages. The popularity of Julia motivates us to construct validation tools, such as fuzzing tools and static analysis tools, so that potential defects can be mitigated early in the development stage. We are applying dynamic analysis tools, such as American Fuzzy Lop (AFL), to find bugs in the Julia compiler. We are also creating a security static analysis tool to find security weaknesses in Julia programs; as of today, no security static analysis tools exist for Julia programs.
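The mutate-and-run loop behind fuzzing can be sketched as follows. AFL itself is coverage-guided and far more sophisticated; this simplified sketch only illustrates the core idea, and the toy parser stands in for a real target such as a compiler front end. All names here are hypothetical.

```python
import random

def toy_parser(data: bytes):
    """Toy stand-in for a parser: crashes on an unbalanced closing bracket."""
    depth = 0
    for b in data:
        if b == ord("["):
            depth += 1
        elif b == ord("]"):
            depth -= 1
            if depth < 0:
                raise ValueError("unbalanced bracket")

def mutate(seed: bytes, rng: random.Random) -> bytes:
    """Flip 1-4 random bytes of the seed to random values."""
    out = bytearray(seed)
    for _ in range(rng.randint(1, 4)):
        out[rng.randrange(len(out))] = rng.randrange(256)
    return bytes(out)

def fuzz(target, seed: bytes, iterations: int = 2000, rng=None):
    """Repeatedly mutate the seed and collect inputs that crash the target."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed, rng)
        try:
            target(candidate)
        except Exception:
            crashes.append(candidate)
    return crashes

crashes = fuzz(toy_parser, b"[x + y]")
print(len(crashes), "crashing inputs found")
```

Real coverage-guided fuzzers like AFL additionally instrument the target and keep only mutants that reach new code paths, which is what makes them effective on large programs such as the Julia compiler.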
Datasets and Software:

To be available soon.