- refine, formalize, and extend an existing perturbation methodology to construct a principled, security-aware dataset of real-world vulnerable and secure JavaScript code. The work plan includes: 1
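A perturbation methodology of this kind typically applies semantics-preserving transformations (for example, identifier renaming) to each snippet so that models can be probed on surface-level changes. The following is only a minimal sketch of that idea, assuming a regex-based renamer; the function name and example snippet are illustrative, and a real pipeline would operate on a JavaScript AST rather than regexes, since a word-boundary substitution can over-match identifiers inside strings or comments.

```python
import re

def rename_identifier(source: str, old: str, new: str) -> str:
    """Semantics-preserving perturbation sketch: rename one identifier.

    Illustrative only; a production pipeline would parse the JavaScript
    into an AST and rename bindings scope-aware.
    """
    return re.sub(rf"\b{re.escape(old)}\b", new, source)

snippet = "function check(token) { return token.length > 0; }"
print(rename_identifier(snippet, "token", "t0"))
# → function check(t0) { return t0.length > 0; }
```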
- JavaScript code using contrastive learning with a tailored security-aware loss function. The student will fine-tune selected models using secure-insecure code pairs derived from Tasks 1 and 2 and evaluate
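One common form such a security-aware contrastive objective can take is a margin-based loss that pushes the embeddings of a secure snippet and its insecure counterpart apart. The sketch below assumes that form; the function name, the squared-hinge shape, and the margin value are assumptions for illustration, not the project's actual loss.

```python
import numpy as np

def security_contrastive_loss(secure: np.ndarray, insecure: np.ndarray,
                              margin: float = 1.0) -> float:
    """Margin-based contrastive loss over paired embeddings (a sketch).

    secure, insecure: (batch, dim) embeddings of each secure snippet and
    its insecure counterpart. Pairs closer than `margin` are penalized,
    pushing the two classes apart in embedding space.
    """
    dists = np.linalg.norm(secure - insecure, axis=1)
    return float(np.mean(np.maximum(0.0, margin - dists) ** 2))

# Identical embeddings incur the full margin penalty; distant ones incur none.
batch = np.zeros((2, 4))
print(security_contrastive_loss(batch, batch))          # → 1.0
print(security_contrastive_loss(batch, batch + 10.0))   # → 0.0
```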
- of Large Language Models (LLMs) in distinguishing secure from insecure JavaScript code. The student will design and implement a systematic evaluation pipeline to assess model behavior under perturbation
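One simple metric such an evaluation pipeline might report is label consistency: the fraction of (original, perturbed) snippet pairs on which the model's secure/insecure verdict is unchanged. The sketch below assumes that metric; `classify` is a stand-in for the LLM under test, and the metric name is an assumption, not the thesis's defined measure.

```python
from typing import Callable, Iterable, Tuple

def perturbation_consistency(classify: Callable[[str], str],
                             pairs: Iterable[Tuple[str, str]]) -> float:
    """Fraction of snippet pairs whose predicted label survives perturbation.

    `classify` stands in for the LLM under evaluation; in the real
    pipeline it would wrap a model call rather than a heuristic.
    """
    pairs = list(pairs)
    same = sum(classify(orig) == classify(pert) for orig, pert in pairs)
    return same / len(pairs)

# Toy heuristic classifier for demonstration only.
classify = lambda code: "insecure" if "eval(" in code else "secure"
pairs = [("eval(x)", "eval( x )"), ("safe()", "eval(s)")]
print(perturbation_consistency(classify, pairs))  # → 0.5
```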
- insecure JavaScript code from open-source repositories. Identify and label security-related commits using diff-based analysis. Integrate synthetic data generation (e.g., AST-based vulnerability injection
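Labeling security-related commits often starts with a keyword heuristic over commit messages before the diff itself is inspected. The sketch below assumes that first-pass heuristic; the pattern list is illustrative and far from exhaustive, and a production labeler would also analyze the diff hunks, as the description above indicates.

```python
import re

# Illustrative patterns only; a real labeler would use a curated list
# and combine message matching with diff-based analysis.
SECURITY_PATTERNS = [
    r"\bxss\b", r"\bcsrf\b", r"\bsql injection\b",
    r"\bcve-\d{4}-\d+\b", r"vulnerab", r"sanitiz",
]

def is_security_commit(message: str) -> bool:
    """Heuristic first pass: does the commit message mention security?"""
    msg = message.lower()
    return any(re.search(pattern, msg) for pattern in SECURITY_PATTERNS)

print(is_security_commit("Fix XSS in template renderer"))  # → True
print(is_security_commit("Bump version to 2.1.0"))         # → False
```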