01. During automated data preparation, an AI system flags missing values and inconsistent formats across multiple data sources. This step occurs before analysis or modeling begins. Which task category is the AI primarily performing?
a) Feature engineering
b) Data quality checking and cleaning
c) Model evaluation
d) Prompt optimization
02. What is the primary role of a context window in generative AI systems?
a) Enforcing ethical constraints
b) Controlling model accuracy
c) Defining maximum memory for a single interaction
d) Limiting training data size
03. Security teams often restrict training or prompting AI models with sensitive customer data. This reduces risk but may limit usefulness. What is the primary trade-off being managed?
a) Performance versus privacy
b) Speed versus accuracy
c) Automation versus scalability
d) Cost versus latency
04. Why is explainability particularly important when AI systems support human decision-making? Decision-makers must trust and understand recommendations before acting on them.
a) It increases model speed
b) It enables informed oversight and challenge
c) It reduces infrastructure cost
d) It eliminates bias entirely
05. An AI practitioner is assessing threats unique to AI systems rather than traditional IT systems. They want to focus on risks introduced by generative capabilities. Which threat is AI-specific?
a) Network congestion
b) Hardware failure
c) Distributed denial-of-service
d) Prompt injection
06. Misinformation generated by AI systems poses reputational and operational risks. This risk increases when outputs are shared without verification. Which mitigation strategy best addresses this concern?
a) Human-in-the-loop validation
b) Autonomous deployment without review
c) Larger context windows
d) Reduced logging
07. Bias in AI systems can lead to unfair or harmful outcomes, particularly when models are used in decision-support roles. Organizations must address this risk proactively. Which action most directly helps mitigate bias?
a) Increasing inference speed
b) Auditing training and input data for representation gaps
c) Expanding context window size
d) Disabling model logging
08. When designing prompts for image and audio generation, practitioners often adjust structure differently than for text-only tasks. This is because multimodal outputs require clearer intent signaling.
What is the main reason for this difference?
a) Multimodal models ignore constraints
b) Token limits do not apply to images or audio
c) Few-shot prompting is unsupported
d) Non-text outputs require precise guidance on format and attributes
09. Few-shot prompting is often recommended when working with specialized or domain-specific tasks. This technique relies on providing examples to guide the model’s behavior. Why do examples improve model performance in these scenarios?
a) They demonstrate desired patterns and structure
b) They retrain the model dynamically
c) They increase the model’s context window
d) They reduce token consumption
10. Which use case is best suited for diffusion models rather than LLMs?
a) Sentiment analysis
b) Image synthesis from noise
c) Chat-based question answering
d) Code refactoring