Latest 1Z0-1127-25 Free Dumps - Oracle Cloud Infrastructure 2025 Generative AI Professional
Given the following code block:
history = StreamlitChatMessageHistory(key="chat_messages")
memory = ConversationBufferMemory(chat_memory=history)
Which statement is NOT true about StreamlitChatMessageHistory?
Correct answer: A
Explanation: (visible to DumpTOP members only)
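For context, a minimal runnable sketch of the code in this question, assuming the streamlit, langchain, and langchain-community packages are installed; the sample messages are illustrative:

from langchain_community.chat_message_histories import StreamlitChatMessageHistory
from langchain.memory import ConversationBufferMemory

# StreamlitChatMessageHistory stores chat messages in Streamlit session state
# under the given key, so they persist across script reruns within a session.
history = StreamlitChatMessageHistory(key="chat_messages")

# ConversationBufferMemory can use that history as its backing chat_memory.
memory = ConversationBufferMemory(chat_memory=history)

# Messages added here are written to st.session_state["chat_messages"].
history.add_user_message("Hello!")
history.add_ai_message("Hi, how can I help?")
print(history.messages)

(This script is intended to be launched with streamlit run, since the history is tied to a Streamlit session.)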
Given the following code:
chain = prompt | llm
Which statement is true about LangChain Expression Language (LCEL)?
Correct answer: C
Explanation: (visible to DumpTOP members only)
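For context, a minimal LCEL sketch assuming langchain-core and langchain-openai are installed; ChatOpenAI and the model name are illustrative placeholders for any chat model:

from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI

prompt = PromptTemplate.from_template("Answer briefly: {question}")
llm = ChatOpenAI(model="gpt-4o-mini")  # any Runnable chat model works here

# The | operator is LCEL's declarative composition: the formatted prompt
# output is piped into the LLM, producing a single composed Runnable.
chain = prompt | llm

result = chain.invoke({"question": "What does the pipe operator do in LCEL?"})
print(result.content)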
Which role does a "model endpoint" serve in the inference workflow of the OCI Generative AI service?
Correct answer: B
Explanation: (visible to DumpTOP members only)
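For context, a hedged sketch of how a dedicated model endpoint is referenced at inference time with the OCI Python SDK; the OCIDs, region URL, and request class here are illustrative assumptions and may differ by tenancy and SDK version:

import oci

config = oci.config.from_file()  # reads ~/.oci/config

# The inference client talks to the regional Generative AI inference service.
client = oci.generative_ai_inference.GenerativeAiInferenceClient(
    config,
    service_endpoint="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",
)

# A model endpoint is the addressable target that routes an inference request
# to a hosted base or fine-tuned model; a dedicated endpoint is referenced by OCID.
details = oci.generative_ai_inference.models.GenerateTextDetails(
    compartment_id="ocid1.compartment.oc1..example",  # placeholder OCID
    serving_mode=oci.generative_ai_inference.models.DedicatedServingMode(
        endpoint_id="ocid1.generativeaiendpoint.oc1..example"  # placeholder OCID
    ),
    inference_request=oci.generative_ai_inference.models.CohereLlmInferenceRequest(
        prompt="Summarize the role of a model endpoint in one sentence.",
        max_tokens=100,
    ),
)

response = client.generate_text(details)
print(response.data)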
How are documents usually evaluated in the simplest form of keyword-based search?
Correct answer: C
Explanation: (visible to DumpTOP members only)
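For context, a minimal sketch of the simplest keyword-based evaluation, which ranks documents by how often the query terms occur in them; the function and sample texts are illustrative and not tied to any library:

def keyword_score(query: str, document: str) -> int:
    # Count occurrences of each query term in the document (case-insensitive).
    doc_terms = document.lower().split()
    return sum(doc_terms.count(term) for term in query.lower().split())

docs = [
    "Keyword search counts matching terms in each document.",
    "Semantic search compares embeddings instead of raw terms.",
]
query = "keyword terms"

# Rank documents by the number of query-term matches.
ranked = sorted(docs, key=lambda d: keyword_score(query, d), reverse=True)
print(ranked[0])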
How are fine-tuned customer models stored to enable strong data privacy and security in the OCI Generative AI service?
Correct answer: B
Explanation: (visible to DumpTOP members only)
What do prompt templates use for templating in language model applications?
Correct answer: B
Explanation: (visible to DumpTOP members only)
Given the following code:
PromptTemplate(input_variables=["human_input", "city"], template=template)
Which statement is true about PromptTemplate in relation to input_variables?
Correct answer: B
Explanation: (visible to DumpTOP members only)
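For context, a minimal runnable sketch of the code in this question, assuming langchain-core is installed; the template string and values are illustrative:

from langchain_core.prompts import PromptTemplate

template = "You are visiting {city}. Respond to: {human_input}"

# input_variables must name every placeholder used in the template string;
# the values for those variables are supplied later, at format or invoke time.
prompt = PromptTemplate(input_variables=["human_input", "city"], template=template)

print(prompt.format(city="Seoul", human_input="Suggest one museum to visit."))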
Given the following prompts used with a Large Language Model, classify each as employing the Chain-of-Thought, Least-to-Most, or Step-Back prompting technique:
Correct answer: C
Explanation: (visible to DumpTOP members only)
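The prompts from the original question are not reproduced here; the examples below are my own illustrations of what each technique typically looks like:

# Chain-of-Thought: ask the model to reason step by step before answering.
chain_of_thought = (
    "A train travels 60 km in 1.5 hours. Think through the calculation step by "
    "step, then state the average speed."
)

# Least-to-Most: decompose a hard task into simpler subproblems solved in order.
least_to_most = (
    "First list the ingredients needed for a cake, then order the baking steps, "
    "and finally write the full recipe."
)

# Step-Back: first ask a more general question, then apply the answer to the case.
step_back = (
    "What general principles determine a planet's orbital period? Using those "
    "principles, explain why Mars takes longer than Earth to orbit the Sun."
)

for name, prompt in [("Chain-of-Thought", chain_of_thought),
                     ("Least-to-Most", least_to_most),
                     ("Step-Back", step_back)]:
    print(name, "->", prompt)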
What issue might arise from using small datasets with the Vanilla fine-tuning method in the OCI Generative AI service?
Correct answer: A
Explanation: (visible to DumpTOP members only)