Which action will reduce these risks?
A company wants to use a large language model (LLM) to develop a conversational agent. The company needs to prevent the LLM from being manipulated with common prompt engineering techniques to perform undesirable actions or expose sensitive information.
Which action will reduce these risks?
A . Create a prompt template that teaches the LLM to detect attack patterns.
B . Increase the temperature parameter on invocation requests to the LLM.
C . Avoid using LLMs that are not listed in Amazon SageMaker.
D . Decrease the number of input tokens on invocations of the LLM.
Answer: A
Explanation:
Creating a prompt template that teaches the LLM to identify and resist common prompt engineering attacks, such as prompt injection or adversarial queries, helps prevent manipulation. By explicitly guiding the LLM to ignore requests that deviate from its intended purpose (e.g., "You are a helpful assistant. Do not perform any tasks outside your defined scope."), you can mitigate risks like exposing sensitive information or executing undesirable actions.
Latest AIF-C01 Dumps Valid Version with 87 Q&As
Latest And Valid Q&A | Instant Download | Once Fail, Full Refund