
The Web's Most Detailed Chinese-English ChatGPT API Documentation, Part 1: Getting Started with ChatGPT - Introduction


Introduction

Overview

The OpenAI API can be applied to virtually any task that involves understanding or generating natural language or code. We offer a spectrum of models with different levels of power suitable for different tasks, as well as the ability to fine-tune your own custom models. These models can be used for everything from content generation to semantic search and classification.

Key concepts

We recommend completing our quickstart tutorial to get acquainted with key concepts through a hands-on, interactive example.

Quickstart tutorial
Learn by building a quick sample application

Prompts and completions

The completions endpoint is at the center of our API. It provides a simple interface to our models that is extremely flexible and powerful. You input some text as a prompt, and the model will generate a text completion that attempts to match whatever context or pattern you gave it. For example, if you give the API the prompt, “Write a tagline for an ice cream shop”, it will return a completion like “We serve up smiles with every scoop!”
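
For instance, here is a minimal Python sketch of such a request, assuming the openai package with the v0.x Completion interface that was current when this guide was written and an API key exported as OPENAI_API_KEY; the model name text-davinci-003 is an assumption, and any completions-capable model would do:

# Minimal sketch of a completions request (openai v0.x interface).
# Assumes OPENAI_API_KEY is set; the model name is an assumption.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",                       # assumed model
    prompt="Write a tagline for an ice cream shop",
    max_tokens=32,                                   # cap on generated length
    temperature=0.7,                                 # higher -> more varied output
)

# The generated text is returned in the first choice.
print(response["choices"][0]["text"].strip())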

Designing your prompt is essentially how you “program” the model, usually by providing some instructions or a few examples. This is different from most other NLP services which are designed for a single task, such as sentiment classification or named entity recognition. Instead, the completions endpoint can be used for virtually any task including content or code generation, summarization, expansion, conversation, creative writing, style transfer, and more.
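
To illustrate how a prompt “programs” the model with a few examples, here is a hypothetical few-shot prompt for sentiment classification; the task, reviews, and labels below are invented for illustration and are not part of the original article:

# Hypothetical few-shot prompt: an instruction, a few labelled
# examples, then the input we actually want the model to complete.
prompt = """Classify the sentiment of each review as Positive or Negative.

Review: "The scoops were huge and the staff were lovely."
Sentiment: Positive

Review: "Melted before I got to my car, and it was overpriced."
Sentiment: Negative

Review: "Best pistachio ice cream I've had in years."
Sentiment:"""

# This string would be sent as the prompt of a completions request,
# exactly like the tagline example above; the model is expected to
# continue the pattern, e.g. with " Positive".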

Tokens

Our models understand and process text by breaking it down into tokens. Tokens can be words or just chunks of characters. For example, the word “hamburger” gets broken up into the tokens “ham”, “bur” and “ger”, while a short and common word like “pear” is a single token. Many tokens start with a whitespace, for example “ hello” and “ bye”.
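
One way to see this splitting concretely is OpenAI's tiktoken library (a separate package, not mentioned in the original text); the model passed to encoding_for_model is an assumption, and different encodings split the same text differently:

# Sketch: inspect how strings break into tokens with tiktoken.
# Install with `pip install tiktoken`; the model name is an assumption.
import tiktoken

enc = tiktoken.encoding_for_model("text-davinci-003")

for text in ["hamburger", "pear", " hello", " bye"]:
    token_ids = enc.encode(text)
    pieces = [enc.decode([tid]) for tid in token_ids]
    print(f"{text!r}: {len(token_ids)} token(s) -> {pieces}")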

The number of tokens processed in a given API request depends on the length of both your inputs and outputs. As a rough rule of thumb, 1 token is approximately 4 characters or 0.75 words for English text. One limitation to keep in mind is that your text prompt and generated completion combined must be no more than the model’s maximum context length (for most models this is 2048 tokens, or about 1500 words). Check out our tokenizer tool to learn more about how text translates to tokens.
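
Continuing the tiktoken sketch above, here is a rough budget check against the 2,048-token context window mentioned in the text; the model and prompt are again assumptions:

# Sketch: count prompt tokens and estimate how many tokens remain
# for the completion under an assumed 2,048-token context window.
import tiktoken

MAX_CONTEXT_TOKENS = 2048                               # limit quoted above for most models
enc = tiktoken.encoding_for_model("text-davinci-003")   # assumed model

prompt = "Write a tagline for an ice cream shop"
prompt_tokens = len(enc.encode(prompt))
remaining = MAX_CONTEXT_TOKENS - prompt_tokens

print(f"Prompt uses {prompt_tokens} tokens; up to {remaining} remain for the completion.")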

Models

The API is powered by a set of models with different capabilities and price points. Our base GPT-3 models are called Davinci, Curie, Babbage and Ada. Our Codex series is a descendant of GPT-3 that’s been trained on both natural language and code. To learn more, visit our models documentation.
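
As a small companion sketch (again assuming the v0.x openai package and an OPENAI_API_KEY environment variable), the models endpoint can be queried to list the model IDs available to an account:

# Sketch: list available model IDs via the models endpoint
# (openai v0.x interface); requires OPENAI_API_KEY to be set.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

models = openai.Model.list()
for model in models["data"]:
    print(model["id"])   # e.g. the Davinci, Curie, Babbage and Ada families described above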

Next steps

Keep our usage policies in mind as you start building your application.
Explore our examples library for inspiration.
Jump into one of our guides to start building.

Other resources for download

If you would like to keep exploring AI learning paths and knowledge systems, feel free to read my other blog post《重磅 | 完備的人工智能AI 學(xué)習(xí)——基礎(chǔ)知識(shí)學(xué)習(xí)路線,所有資料免關(guān)注免套路直接網(wǎng)盤下載》.
That post draws on well-known open-source projects on GitHub, AI learning platforms, and experts in the field, including Datawhale, ApacheCN, AI有道 and Dr. Huang Haiguang (黃海廣), and gathers roughly 100 GB of material. I hope it helps everyone.
