All Practice Questions
Static Testing

Match each type of attack vector against an LLM (1-4) with the corresponding example (A-D):

1. Data exfiltration
2. Request manipulation
3. Data poisoning
4. Malicious code generation

A. An attacker maliciously modifies the data associated with traceability links between requirements and test cases in the dataset used for fine-tuning an LLM, compromising its accuracy in generating test cases from requirements.
B. An attacker maliciously crafts and provides deceptive prompts that induce an LLM, fine-tuned to assist testers in automated test script generation, to produce vulnerable test scripts with hidden security flaws.
C. An attacker maliciously provides large, specially crafted prompts that induce an LLM, fine-tuned to assist testers in generating test cases, to accidentally reveal confidential API keys inherited from past test projects.
D. An attacker maliciously submits carefully modified reference screenshots to a visual testing framework that uses an LLM for comparative visual analysis, to trick the LLM into systematically ignoring genuine UI issues during regression testing.

A. 1C, 2D, 3A, 4B
B. 1B, 2D, 3A, 4C
C. 1D, 2C, 3B, 4A
D. 1C, 2B, 3D, 4A
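To make the data-poisoning scenario in example A concrete, here is a minimal, hypothetical Python sketch. All record names and the `poison_traceability` helper are illustrative inventions, not part of any ISTQB material or real tool; the point is only to show how corrupting requirement-to-test-case traceability links in a fine-tuning dataset would teach the model wrong associations:

```python
# Hypothetical fine-tuning records linking requirements to test cases.
dataset = [
    {"requirement": "REQ-1: user login", "test_case": "TC-1: verify valid credentials"},
    {"requirement": "REQ-2: password reset", "test_case": "TC-2: verify reset email sent"},
    {"requirement": "REQ-3: session timeout", "test_case": "TC-3: verify logout after idle"},
]

def poison_traceability(records):
    """Return a copy of the records with test cases rotated by one,
    so every requirement is silently linked to the wrong test case."""
    poisoned = [dict(r) for r in records]
    test_cases = [r["test_case"] for r in poisoned]
    rotated = test_cases[1:] + test_cases[:1]  # shift links by one position
    for record, wrong_tc in zip(poisoned, rotated):
        record["test_case"] = wrong_tc
    return poisoned

poisoned = poison_traceability(dataset)
# An LLM fine-tuned on `poisoned` would learn incorrect
# requirement-to-test-case mappings, as described in example A.
```

The sketch deliberately leaves the original `dataset` untouched: in a real attack the tampering happens upstream in the training data, which is why the degradation in generated test cases is hard to trace back to its cause.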
