Natural Language Processing (NLP) models have become essential for tasks such as Question Answering (QA), particularly in domains that demand high accuracy, such as legal document analysis. This study evaluates the ELECTRA model on QA tasks using both Wikipedia-based data and legal contract data, including the Contract Understanding Atticus Dataset (CUAD). We then introduce a universal adversarial trigger, an input-agnostic perturbation designed to probe the model's robustness by steering it toward incorrect predictions. The trigger reduces the model's performance on the widely used SQuAD dataset by more than 10%, exposing its vulnerability to adversarial manipulation. On CUAD, fine-tuning with targeted optimizations (adjusting the model's hyperparameters and revising the dataset to reduce ambiguity in question phrasing) yields a performance improvement of over 12%. These findings indicate that while ELECTRA performs well on general-domain datasets, it struggles with the intricacies of legal text; with specific optimizations and adjustments, however, the model can become substantially more effective for legal document analysis.
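As a minimal sketch of the attack setup described above: a universal adversarial trigger is a short, fixed token sequence inserted into every input (here, the QA passage) rather than crafted per example. The trigger tokens and passage below are hypothetical illustrations only; in practice the trigger would be found by a gradient-guided search against the target model, which is not shown here.

```python
def apply_trigger(context: str, trigger_tokens: list[str],
                  position: str = "prepend") -> str:
    """Insert a fixed, input-agnostic trigger phrase into a QA passage.

    The same trigger is reused across all examples, which is what makes
    the attack "universal": one perturbation degrades predictions on
    many unrelated inputs.
    """
    trigger = " ".join(trigger_tokens)
    if position == "prepend":
        return f"{trigger} {context}"
    return f"{context} {trigger}"


# Hypothetical trigger tokens for illustration; a real trigger is
# optimized against the victim model, not chosen by hand.
trigger = ["why", "how", "because"]
context = "The lease term commences on January 1, 2020."
print(apply_trigger(context, trigger))
```

The perturbed passage would then be fed to the QA model in place of the original context, and the drop in answer accuracy measures the model's robustness to the trigger.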