
The Latest Official OpenAI ChatGPT API, Chat Completions for Multi-Turn Conversations: A Detailed Practical Guide and Tutorial to Help You Master the New Technology Quickly from Zero (Part 2, Source Code Included)



Chat completions (Beta)

Using the OpenAI Chat API, you can build your own applications with gpt-3.5-turbo and gpt-4 to do things like:

  • Draft an email or other piece of writing
  • Write Python code
  • Answer questions about a set of documents
  • Create conversational agents
  • Give your software a natural language interface
  • Tutor in a range of subjects
  • Translate languages
  • Simulate characters for video games and much more

This guide explains how to make an API call for chat-based language models and shares tips for getting good results. You can also experiment with the new chat format in the OpenAI Playground.

Preface

ChatGPT's chat interaction is a powerful tool for engaging with users and delivering innovative experiences. In one sentence: "Pushing chat interaction to its fullest and tailoring the experience to each user, so as to solve their pain points, is the best path to innovation." It helps users quickly pinpoint a problem and get the right answer, improving the user experience while boosting productivity and saving time. Beyond that, ChatGPT can offer personalized service based on each user's characteristics and needs, making every interaction feel unique.

Introduction

Chat models take a series of messages as input, and return a model-generated message as output.

Although the chat format is designed to make multi-turn conversations easy, it’s just as useful for single-turn tasks without any conversations (such as those previously served by instruction following models like text-davinci-003).

An example API call looks as follows:

# Note: you need to be using OpenAI Python v0.27.0 for the code below to work
import openai

openai.ChatCompletion.create(
  model="gpt-3.5-turbo",
  messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
        {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
        {"role": "user", "content": "Where was it played?"}
    ]
)

The main input is the messages parameter. Messages must be an array of message objects, where each object has a role (either “system”, “user”, or “assistant”) and content (the content of the message). Conversations can be as short as 1 message or fill many pages.
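Since a malformed role or content value will simply fail at the API, it can be worth validating the array up front. The helper below is a hypothetical illustration, not part of the openai library:

```python
VALID_ROLES = {"system", "user", "assistant"}

def validate_messages(messages):
    """Check that messages is a non-empty list of {'role', 'content'} dicts."""
    if not messages:
        raise ValueError("messages must contain at least one message")
    for i, msg in enumerate(messages):
        if msg.get("role") not in VALID_ROLES:
            raise ValueError(f"message {i}: invalid role {msg.get('role')!r}")
        if not isinstance(msg.get("content"), str):
            raise ValueError(f"message {i}: content must be a string")
    return True

validate_messages([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who won the world series in 2020?"},
])
```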

Typically, a conversation is formatted with a system message first, followed by alternating user and assistant messages.

The system message helps set the behavior of the assistant. In the example above, the assistant was instructed with “You are a helpful assistant.”

gpt-3.5-turbo-0301 does not always pay strong attention to system messages. Future models will be trained to pay stronger attention to system messages.

The user messages help instruct the assistant. They can be generated by the end users of an application, or set by a developer as an instruction.

The assistant messages help store prior responses. They can also be written by a developer to help give examples of desired behavior.

Including the conversation history helps when user instructions refer to prior messages. In the example above, the user’s final question of “Where was it played?” only makes sense in the context of the prior messages about the World Series of 2020. Because the models have no memory of past requests, all relevant information must be supplied via the conversation. If a conversation cannot fit within the model’s token limit, it will need to be shortened in some way.
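One simple way to shorten a conversation that no longer fits is to drop the oldest non-system messages until the total is under budget. The sketch below is a hypothetical illustration; its rough_token_count is a crude whitespace-based stand-in, and real code should count tokens with tiktoken:

```python
def rough_token_count(message):
    """Crude stand-in for a real tokenizer: whitespace words plus per-message overhead."""
    return len(message["content"].split()) + 4  # +4 approximates formatting overhead

def trim_conversation(messages, max_tokens, count=rough_token_count):
    """Drop the oldest non-system messages until the conversation fits max_tokens."""
    trimmed = list(messages)
    while sum(count(m) for m in trimmed) > max_tokens:
        for i, m in enumerate(trimmed):
            if m["role"] != "system":
                del trimmed[i]  # remove the oldest non-system message
                break
        else:
            break  # only system messages remain; nothing left to drop
    return trimmed
```

Keeping the system message ensures the assistant's configured behavior survives the trimming; remember that any dropped message is gone from the model's view entirely.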

Response format

An example API response looks as follows:

{
 'id': 'chatcmpl-6p9XYPYSTTRi0xEviKjjilqrWU2Ve',
 'object': 'chat.completion',
 'created': 1677649420,
 'model': 'gpt-3.5-turbo',
 'usage': {'prompt_tokens': 56, 'completion_tokens': 31, 'total_tokens': 87},
 'choices': [
   {
    'message': {
      'role': 'assistant',
      'content': 'The 2020 World Series was played in Arlington, Texas at the Globe Life Field, which was the new home stadium for the Texas Rangers.'},
    'finish_reason': 'stop',
    'index': 0
   }
  ]
}

In Python, the assistant’s reply can be extracted with response['choices'][0]['message']['content'].

Every response will include a finish_reason. The possible values for finish_reason are:

  • stop: API returned complete model output
  • length: Incomplete model output due to max_tokens parameter or token limit
  • content_filter: Omitted content due to a flag from our content filters
  • null: API response still in progress or incomplete
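Application code will usually want to branch on this field; the mapping below is an illustrative sketch, not an official API helper:

```python
def describe_finish_reason(finish_reason):
    """Map a finish_reason value to a human-readable explanation."""
    explanations = {
        "stop": "The API returned the complete model output.",
        "length": "The output was truncated by max_tokens or the model's token limit.",
        "content_filter": "Content was omitted due to a content-filter flag.",
        None: "The response is still in progress or incomplete.",
    }
    return explanations.get(finish_reason, f"Unknown finish_reason: {finish_reason!r}")
```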

Managing tokens

Language models read text in chunks called tokens. In English, a token can be as short as one character or as long as one word (e.g., a or apple), and in some languages tokens can be even shorter than one character or even longer than one word.

For example, the string "ChatGPT is great!" is encoded into six tokens: ["Chat", "G", "PT", " is", " great", "!"].

The total number of tokens in an API call affects:

  • How much your API call costs, as you pay per token
  • How long your API call takes, as writing more tokens takes more time
  • Whether your API call works at all, as total tokens must be below the model’s maximum limit (4096 tokens for gpt-3.5-turbo-0301)

Both input and output tokens count toward these quantities. For example, if your API call used 10 tokens in the message input and you received 20 tokens in the message output, you would be billed for 30 tokens.
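A small helper makes that arithmetic explicit. The $0.002-per-1K-tokens figure below was the widely cited gpt-3.5-turbo price at the time of writing; treat it as an assumption that may change:

```python
def estimate_cost(prompt_tokens, completion_tokens, price_per_1k_tokens=0.002):
    """Estimate the cost of a call: input and output tokens are both billed."""
    total_tokens = prompt_tokens + completion_tokens
    return total_tokens / 1000 * price_per_1k_tokens

# The 10-tokens-in / 20-tokens-out example from the text bills for 30 tokens.
print(estimate_cost(10, 20))
```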

To see how many tokens are used by an API call, check the usage field in the API response (e.g., response['usage']['total_tokens']).

Chat models like gpt-3.5-turbo and gpt-4 use tokens in the same way as other models, but because of their message-based formatting, it’s more difficult to count how many tokens will be used by a conversation.

Counting tokens for chat API calls

Below is an example function for counting tokens for messages passed to gpt-3.5-turbo-0301.

The exact way that messages are converted into tokens may change from model to model. So when future model versions are released, the answers returned by this function may be only approximate. The ChatML documentation explains how messages are converted into tokens by the OpenAI API, and may be useful for writing your own function.

import tiktoken

def num_tokens_from_messages(messages, model="gpt-3.5-turbo-0301"):
  """Returns the number of tokens used by a list of messages."""
  try:
      encoding = tiktoken.encoding_for_model(model)
  except KeyError:
      encoding = tiktoken.get_encoding("cl100k_base")
  if model == "gpt-3.5-turbo-0301":  # note: future models may deviate from this
      num_tokens = 0
      for message in messages:
          num_tokens += 4  # every message follows <im_start>{role/name}\n{content}<im_end>\n
          for key, value in message.items():
              num_tokens += len(encoding.encode(value))
              if key == "name":  # if there's a name, the role is omitted
                  num_tokens += -1  # role is always required and always 1 token
      num_tokens += 2  # every reply is primed with <im_start>assistant
      return num_tokens
  else:
      raise NotImplementedError(f"""num_tokens_from_messages() is not presently implemented for model {model}.
  See https://github.com/openai/openai-python/blob/main/chatml.md for information on how messages are converted to tokens.""")

Next, create a message and pass it to the function defined above to see the token count; this should match the value returned by the API usage parameter:

messages = [
  {"role": "system", "content": "You are a helpful, pattern-following assistant that translates corporate jargon into plain English."},
  {"role": "system", "name":"example_user", "content": "New synergies will help drive top-line growth."},
  {"role": "system", "name": "example_assistant", "content": "Things working well together will increase revenue."},
  {"role": "system", "name":"example_user", "content": "Let's circle back when we have more bandwidth to touch base on opportunities for increased leverage."},
  {"role": "system", "name": "example_assistant", "content": "Let's talk later when we're less busy about how to do better."},
  {"role": "user", "content": "This late pivot means we don't have time to boil the ocean for the client deliverable."},
]

model = "gpt-3.5-turbo-0301"

print(f"{num_tokens_from_messages(messages, model)} prompt tokens counted.")
# Should show ~126 total_tokens

To confirm that the number generated by our function above is the same as what the API returns, create a new Chat Completion:

# example token count from the OpenAI API
import openai

response = openai.ChatCompletion.create(
    model=model,
    messages=messages,
    temperature=0,
)

print(f'{response["usage"]["prompt_tokens"]} prompt tokens used.')

To see how many tokens are in a text string without making an API call, use OpenAI’s tiktoken Python library. Example code can be found in the OpenAI Cookbook’s guide on how to count tokens with tiktoken.

Each message passed to the API consumes the number of tokens in the content, role, and other fields, plus a few extra for behind-the-scenes formatting. This may change slightly in the future.

If a conversation has too many tokens to fit within a model’s maximum limit (e.g., more than 4096 tokens for gpt-3.5-turbo), you will have to truncate, omit, or otherwise shrink your text until it fits. Beware that if a message is removed from the messages input, the model will lose all knowledge of it.

Note too that very long conversations are more likely to receive incomplete replies. For example, a gpt-3.5-turbo conversation that is 4090 tokens long will have its reply cut off after just 6 tokens.
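That cutoff follows directly from the arithmetic: a reply can use at most the context limit minus the prompt's tokens. A minimal sketch, using the 4096-token limit cited above:

```python
def max_reply_tokens(prompt_tokens, context_limit=4096):
    """Tokens left for the model's reply after the prompt is accounted for."""
    return max(context_limit - prompt_tokens, 0)

print(max_reply_tokens(4090))  # a 4090-token conversation leaves only 6 reply tokens
```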

Instructing chat models

Best practices for instructing models may change from model version to version. The advice that follows applies to gpt-3.5-turbo-0301 and may not apply to future models.

Many conversations begin with a system message to gently instruct the assistant. For example, here is one of the system messages used for ChatGPT:

You are ChatGPT, a large language model trained by OpenAI. Answer as concisely as possible. Knowledge cutoff: {knowledge_cutoff} Current date: {current_date}

In general, gpt-3.5-turbo-0301 does not pay strong attention to the system message, and therefore important instructions are often better placed in a user message.

If the model isn’t generating the output you want, feel free to iterate and experiment with potential improvements. You can try approaches like:

  • Make your instruction more explicit
  • Specify the format you want the answer in
  • Ask the model to think step by step or debate pros and cons before settling on an answer

For more prompt engineering ideas, read the OpenAI Cookbook guide on techniques to improve reliability.

Beyond the system message, temperature and max tokens are two of the many options developers have for influencing the output of the chat models. For temperature, higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. In the case of max tokens, if you want to limit a response to a certain length, max tokens can be set to an arbitrary number. This can cause issues, however: if you set max tokens to 5, for example, the output will be cut off mid-sentence and the result will not make sense to users.
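To make those two knobs concrete, here is a sketch of a request that sets both; the prompt text and the specific values are illustrative assumptions, not recommendations:

```python
# Illustrative request parameters for a chat completion call.
request_params = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the plot of Hamlet in one sentence."},
    ],
    "temperature": 0.2,  # lower values make output more focused and deterministic
    "max_tokens": 100,   # caps the reply; too small a cap truncates mid-sentence
}
# response = openai.ChatCompletion.create(**request_params)
```

A low temperature suits extraction-style tasks, while a generous max_tokens cap avoids visible truncation.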

Chat vs Completions

Because gpt-3.5-turbo performs at a similar capability to text-davinci-003 but at 10% the price per token, we recommend gpt-3.5-turbo for most use cases.

For many developers, the transition is as simple as rewriting and retesting a prompt.

For example, if you translated English to French with the following completions prompt:

Translate the following English text to French: "{text}"

An equivalent chat conversation could look like:

[
  {"role": "system", "content": "You are a helpful assistant that translates English to French."},
  {"role": "user", "content": 'Translate the following English text to French: "{text}"'}
]

Or even just the user message:

[
  {"role": "user", "content": 'Translate the following English text to French: "{text}"'}
]
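That rewrite can be mechanized for migration purposes; the helper below is a hypothetical convenience function, not part of the openai library:

```python
def completion_prompt_to_chat(prompt, system_message=None):
    """Wrap a completions-style prompt in the chat messages format."""
    messages = []
    if system_message:
        messages.append({"role": "system", "content": system_message})
    messages.append({"role": "user", "content": prompt})
    return messages

messages = completion_prompt_to_chat(
    'Translate the following English text to French: "{text}"',
    system_message="You are a helpful assistant that translates English to French.",
)
```

Passing no system_message reproduces the user-message-only variant shown above.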

FAQ

Is fine-tuning available for gpt-3.5-turbo?

No. As of Mar 1, 2023, you can only fine-tune base GPT-3 models. See the fine-tuning guide for more details on how to use fine-tuned models.

Do you store the data that is passed into the API?

As of March 1st, 2023, we retain your API data for 30 days but no longer use your data sent via the API to improve our models. Learn more in our data usage policy.

Adding a moderation layer
If you want to add a moderation layer to the outputs of the Chat API, you can follow our moderation guide to prevent content that violates OpenAI’s usage policies from being shown.

Additional resources

If you would like to keep exploring AI learning paths and the broader knowledge landscape, see my other post, 《重磅 | 完備的人工智能AI 學(xué)習(xí)——基礎(chǔ)知識(shí)學(xué)習(xí)路線,所有資料免關(guān)注免套路直接網(wǎng)盤下載》.
That post draws on well-known open-source projects on GitHub, AI platforms, and experts in the field (Datawhale, ApacheCN, AI有道, and Dr. Huang Haiguang), totaling roughly 100 GB of material. I hope it helps everyone.

