8. Simulating characters in games, and so on
# Note: you need to be using OpenAI Python v0.27.0 for the code below to work
import openai

openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
        {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
        {"role": "user", "content": "Where was it played?"}
    ]
)
messages is an array of objects, each of which has a role; there are three roles in total: system, user, and assistant. Assistant messages store the model's earlier replies. They are what sustain a continuing conversation by supplying the session's context.
In the ChatGPT conversation scenario below, the first message tells the model that it is a translator; then, in the alternating turns that follow, ChatGPT translates each English sentence the user sends into Chinese and returns it. That is a continuing conversation with context. The gpt-3.5-turbo model has no memory and does not retain the context of previous requests, so all relevant information must be supplied within the conversation itself for the session to continue. Typically, a conversation is formatted as a system message first, followed by alternating user and assistant messages. With the Chat Completion API we can make such a context-carrying request:
completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        # "You are a translator"
        {"role": "system", "content": "你是一个翻译家"},
        # "Translate the English sentences I send you into Chinese; you do not
        # need to understand the content or answer it."
        {"role": "user", "content": "将我发你的英文句子翻译成中文,你不需要理解内容的含义作出回答。"},
        {"role": "user", "content": "Draft an email or other piece of writing."}
    ]
)
The assistant's response output:

{
  "id": "chatcmpl-6q0Kqgk2qlcpCGDYcLQnUmUVVrMd6",
  "object": "chat.completion",
  "created": 1677852364,
  "model": "gpt-3.5-turbo-0301",
  "usage": {
    "prompt_tokens": 69,
    "completion_tokens": 20,
    "total_tokens": 89
  },
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "起草一封电子邮件或其他写作材料。"
      },
      "finish_reason": "stop",
      "index": 0
    }
  ]
}
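Because the model keeps no memory between requests, the client has to pull the reply out of this response object and append it back onto messages before the next call. Below is a minimal sketch of such a loop under the same v0.27.0-style API; the input() prompt loop is illustrative and not part of the original example:

# A minimal sketch: carry context forward by appending each reply
messages = [
    {"role": "system", "content": "你是一个翻译家"}  # "You are a translator"
]
while True:
    sentence = input("English sentence: ")
    messages.append({"role": "user", "content": sentence})
    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages
    )
    reply = completion["choices"][0]["message"]["content"]
    # Append the assistant's reply so the next request carries the full context
    messages.append({"role": "assistant", "content": reply})
    print(reply)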
The usage field also tells you whether an API call will remain valid: the total token count must stay below the model's maximum limit (4,096 tokens for gpt-3.5-turbo-0301).
{
  "usage": {
    "prompt_tokens": 69,
    "completion_tokens": 20,
    "total_tokens": 89
  }
}
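One practical use for this field is a guard that drops the oldest turns once a conversation approaches the limit. Here is a rough sketch; the trim_history name and the 500-token headroom reserved for the reply are illustrative assumptions, and the counter it calls is the num_tokens_from_messages helper defined in the next section:

MAX_CONTEXT_TOKENS = 4096  # context window of gpt-3.5-turbo-0301
REPLY_RESERVE = 500        # illustrative headroom kept free for the completion

def trim_history(messages, count_tokens):
    """Drop the oldest non-system messages until the prompt fits."""
    while count_tokens(messages) > MAX_CONTEXT_TOKENS - REPLY_RESERVE and len(messages) > 2:
        # messages[0] is the system message; remove the oldest turn after it
        messages.pop(1)
    return messages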
5. Counting Token consumption

import tiktoken
def num_tokens_from_messages(messages, model="gpt-3.5-turbo-0301"):
    """Returns the number of tokens used by a list of messages."""
    try:
        encoding = tiktoken.encoding_for_model(model)
    except KeyError:
        encoding = tiktoken.get_encoding("cl100k_base")
    if model == "gpt-3.5-turbo-0301":  # note: future models may deviate from this
        num_tokens = 0
        for message in messages:
            num_tokens += 4  # every message follows <im_start>{role/name}\n{content}<im_end>\n
            for key, value in message.items():
                num_tokens += len(encoding.encode(value))
                if key == "name":  # if there's a name, the role is omitted
                    num_tokens += -1  # role is always required and always 1 token
        num_tokens += 2  # every reply is primed with <im_start>assistant
        return num_tokens
    else:
        raise NotImplementedError(f"""num_tokens_from_messages() is not presently implemented for model {model}.
See https://github.com/openai/openai-python/blob/main/chatml.md for information on how messages are converted to tokens.""")
messages = [
    {"role": "system", "content": "你是一个翻译家"},
    {"role": "user", "content": "将我发你的英文句子翻译成中文,你不需要理解内容的含义作出回答。"},
    {"role": "user", "content": "Draft an email or other piece of writing."}
]
# example token count from the function defined above
model = "gpt-3.5-turbo-0301"
print(f"{num_tokens_from_messages(messages, model)} prompt tokens counted.")
# output: 69 prompt tokens counted.
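As a cross-check, the locally computed figure should match the prompt_tokens the API itself reports for the same messages. This sketch makes a live API call and assumes the openai client is already configured:

# Verify the local count against the API's own accounting
completion = openai.ChatCompletion.create(model=model, messages=messages)
print(f'{completion["usage"]["prompt_tokens"]} prompt tokens used by the API.')
# Both figures should agree: 69 prompt tokens.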
Also note that very long conversations are more likely to receive incomplete replies. For example, a gpt-3.5-turbo conversation that is 4,090 tokens long will have its reply cut off after only 6 tokens.
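A truncated reply is detectable in code: the choice comes back with finish_reason set to "length" instead of "stop". A small sketch of the check:

choice = completion["choices"][0]
if choice["finish_reason"] == "length":
    # The reply hit the token limit; shorten the prompt (or, if the prompt
    # is small, raise the max_tokens parameter) and retry
    print("Warning: the reply was truncated at the model's token limit.")
print(choice["message"]["content"])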