
Python Meets OpenAI, Part 1: How to Format Inputs to ChatGPT Models

This article introduces the first installment of the Python Meets OpenAI tutorial series: how to format inputs to ChatGPT models. I hope it is useful; if you spot mistakes or anything I haven't fully considered, feedback is welcome.



ChatGPT is powered by gpt-3.5-turbo and gpt-4, OpenAI's most advanced models. You can build your own applications with gpt-3.5-turbo or gpt-4 using the OpenAI API.
Chat models take a list of messages as input and return a model-written message as output.

This guide illustrates the chat format with a few example API calls.

1. Import the openai library

# if needed, install and/or upgrade to the latest version of the OpenAI Python library
%pip install --upgrade openai
Requirement already satisfied: openai in /Users/linxi/anaconda3/envs/pytorch18/lib/python3.8/site-packages (0.27.8)
Requirement already satisfied: requests>=2.20 in /Users/linxi/anaconda3/envs/pytorch18/lib/python3.8/site-packages (from openai) (2.26.0)
Requirement already satisfied: tqdm in /Users/linxi/anaconda3/envs/pytorch18/lib/python3.8/site-packages (from openai) (4.62.3)
Requirement already satisfied: aiohttp in /Users/linxi/anaconda3/envs/pytorch18/lib/python3.8/site-packages (from openai) (3.8.4)
Requirement already satisfied: charset-normalizer~=2.0.0 in /Users/linxi/anaconda3/envs/pytorch18/lib/python3.8/site-packages (from requests>=2.20->openai) (2.0.7)
Requirement already satisfied: certifi>=2017.4.17 in /Users/linxi/anaconda3/envs/pytorch18/lib/python3.8/site-packages (from requests>=2.20->openai) (2022.12.7)
Requirement already satisfied: idna<4,>=2.5 in /Users/linxi/anaconda3/envs/pytorch18/lib/python3.8/site-packages (from requests>=2.20->openai) (3.3)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /Users/linxi/anaconda3/envs/pytorch18/lib/python3.8/site-packages (from requests>=2.20->openai) (1.26.7)
Requirement already satisfied: multidict<7.0,>=4.5 in /Users/linxi/anaconda3/envs/pytorch18/lib/python3.8/site-packages (from aiohttp->openai) (6.0.4)
Requirement already satisfied: frozenlist>=1.1.1 in /Users/linxi/anaconda3/envs/pytorch18/lib/python3.8/site-packages (from aiohttp->openai) (1.3.3)
Requirement already satisfied: aiosignal>=1.1.2 in /Users/linxi/anaconda3/envs/pytorch18/lib/python3.8/site-packages (from aiohttp->openai) (1.3.1)
Requirement already satisfied: async-timeout<5.0,>=4.0.0a3 in /Users/linxi/anaconda3/envs/pytorch18/lib/python3.8/site-packages (from aiohttp->openai) (4.0.2)
Requirement already satisfied: yarl<2.0,>=1.0 in /Users/linxi/anaconda3/envs/pytorch18/lib/python3.8/site-packages (from aiohttp->openai) (1.8.2)
Requirement already satisfied: attrs>=17.3.0 in /Users/linxi/anaconda3/envs/pytorch18/lib/python3.8/site-packages (from aiohttp->openai) (22.2.0)
Note: you may need to restart the kernel to use updated packages.
# import the OpenAI Python library for calling the OpenAI API
import openai
# set your OpenAI API key (use a placeholder or an environment variable; never commit a real key)
openai.api_key = "sk-..."  # replace with your own API key
model_list = openai.Model.list()  # the list of supported models

# print the models whose ids contain 'gpt'
for model in model_list['data']:
    if 'gpt' in model['id']:
        print(model['id'])
gpt-3.5-turbo-16k-0613
gpt-3.5-turbo-16k
gpt-3.5-turbo-0301
gpt-3.5-turbo
gpt-3.5-turbo-0613

2. An example chat API call

A chat API call has two required inputs:

  • model: the name of the model you want to use (e.g., gpt-3.5-turbo, gpt-4, gpt-3.5-turbo-0613, gpt-3.5-turbo-16k-0613)
  • messages: a list of message objects, where each object has two required fields:
    • role: the role of the messenger (either system, user, or assistant)
    • content: the content of the message (e.g., Write me a beautiful poem)

Messages can also contain an optional name field, which gives the messenger a name (e.g., example-user, Alice, BlackbeardBot). Names may not contain spaces.

As of June 2023, you can also optionally submit a list of functions that tell GPT whether it can generate JSON to feed into a function. For details, see the [documentation](https://platform.openai.com/docs/guides/gpt/function-calling), the [API reference](https://platform.openai.com/docs/api-reference/chat), or the OpenAI Cookbook guide on how to call functions with chat models.
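As a taste of what that looks like, here is a minimal sketch of the June-2023 functions parameter, continuing from the session above. The function name and JSON schema are hypothetical, purely for illustration:

# A hypothetical function schema, for illustration only
functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather in a given city",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City name, e.g. Beijing"},
            },
            "required": ["location"],
        },
    }
]
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "What's the weather like in Beijing?"}],
    functions=functions,
)
message = response["choices"][0]["message"]
if message.get("function_call"):
    # the model chose to call the function; arguments arrive as a JSON string
    print(message["function_call"]["name"], message["function_call"]["arguments"])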

Typically, a conversation starts with a system message that tells the assistant how to behave, followed by alternating user and assistant messages, but you are not required to follow this format.

Let's look at an example chat API call to see how the chat format works in practice.

# Example OpenAI Python library request
MODEL = "gpt-3.5-turbo-16k-0613"
response = openai.ChatCompletion.create(
    model=MODEL,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Knock knock."},
        {"role": "assistant", "content": "Who's there?"},
        {"role": "user", "content": "Orange."},
    ],
    temperature=0,
)

response
<OpenAIObject chat.completion id=chatcmpl-7avh8y3M0d47CBJGYQxUXQZnisQYP at 0x7fcc3044ce50> JSON: {
  "id": "chatcmpl-7avh8y3M0d47CBJGYQxUXQZnisQYP",
  "object": "chat.completion",
  "created": 1689035942,
  "model": "gpt-3.5-turbo-16k-0613",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Orange who?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 35,
    "completion_tokens": 3,
    "total_tokens": 38
  }
}

The response object has the following fields:

  • id: the ID of the request
  • object: the type of object returned (e.g., chat.completion)
  • created: the timestamp of the request
  • model: the full name of the model used to generate the response
  • usage: the number of tokens used to generate the reply, counting prompt, completion, and total tokens
  • choices: a list of completion objects (only one, unless you set n greater than 1)
    • message: the message object generated by the model, with role and content
    • finish_reason: the reason the model stopped generating text (either stop, or length if the max_tokens limit was reached)
    • index: the index of this completion in the list of choices

Extract just the reply with:

response['choices'][0]['message']['content']
'Orange who?'
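Besides the text, the same response object carries metadata that is often worth logging. The values below match the response shown above:

response["choices"][0]["finish_reason"]  # 'stop'
response["usage"]["total_tokens"]        # 38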

You can also use the chat format for non-conversational tasks by putting the instruction in the first user message.

For example, to ask the model to explain asynchronous programming in the style of the pirate Blackbeard, we can structure the conversation like this:

# example with a system message
response = openai.ChatCompletion.create(
    model=MODEL,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain asynchronous programming in the style of the pirate Blackbeard."},
    ],
    temperature=0,
)

print(response['choices'][0]['message']['content'])
Arr, me matey! Let me tell ye a tale of asynchronous programming, in the style of the fearsome pirate Blackbeard!

Ye see, in the world of programming, there be times when ye need to perform tasks that take a long time to complete. These tasks might be fetchin' data from a faraway server, or performin' complex calculations. Now, in the olden days, programmers would wait patiently for these tasks to finish before movin' on to the next one. But that be a waste of time, me hearties!

Asynchronous programming be like havin' a crew of scallywags workin' on different tasks at the same time. Instead of waitin' for one task to finish before startin' the next, ye can set sail on multiple tasks at once! This be a mighty efficient way to get things done, especially when ye be dealin' with slow or unpredictable tasks.

In the land of JavaScript, we use a special technique called callbacks to achieve this. When ye start a task, ye pass along a callback function that be called once the task be completed. This way, ye can move on to other tasks while ye be waitin' for the first one to finish. It be like sendin' yer crewmates off on different missions, while ye be plannin' the next raid!

But beware, me mateys! Asynchronous programming can be a treacherous sea to navigate. Ye need to be careful with the order in which ye be executin' tasks, and make sure ye be handlin' any errors that might arise. It be a bit more complex than the traditional way of doin' things, but the rewards be worth it!

So, me hearties, if ye be lookin' to make yer programs faster and more efficient, give asynchronous programming a try. Just remember to keep a weather eye on yer code, and ye'll be sailin' the high seas of programming like a true pirate!
# example without a system message
response = openai.ChatCompletion.create(
    model=MODEL,
    messages=[
        {"role": "user", "content": "Explain asynchronous programming in the style of the pirate Blackbeard."},
    ],
    temperature=0,
)

print(response['choices'][0]['message']['content'])

Arr, me hearties! Gather 'round and listen up, for I be tellin' ye about the mysterious art of asynchronous programming, in the style of the fearsome pirate Blackbeard!

Now, ye see, in the world of programming, there be times when we need to perform tasks that take a mighty long time to complete. These tasks might involve fetchin' data from the depths of the internet, or performin' complex calculations that would make even Davy Jones scratch his head.

In the olden days, we pirates used to wait patiently for each task to finish afore movin' on to the next one. But that be a waste of precious time, me hearties! We be pirates, always lookin' for ways to be more efficient and plunder more booty!

That be where asynchronous programming comes in, me mateys. It be a way to tackle multiple tasks at once, without waitin' for each one to finish afore movin' on. It be like havin' a crew of scallywags workin' on different tasks simultaneously, while ye be overseein' the whole operation.

Ye see, in asynchronous programming, we be breakin' down our tasks into smaller chunks called "coroutines." Each coroutine be like a separate pirate, workin' on its own task. When a coroutine be startin' its work, it don't wait for the task to finish afore movin' on to the next one. Instead, it be movin' on to the next task, lettin' the first one continue in the background.

Now, ye might be wonderin', "But Blackbeard, how be we know when a task be finished if we don't be waitin' for it?" Ah, me hearties, that be where the magic of callbacks and promises come in!

When a coroutine be startin' its work, it be attachin' a callback or a promise to the task. This be like leavin' a message in a bottle, tellin' the task to send a signal when it be finished. Once the task be done, it be sendin' a signal to the callback or fulfillin' the promise, lettin' the coroutine know that it be time to handle the results.

This way, me mateys, we be able to keep our ship sailin' smoothly, with multiple tasks bein' worked on at the same time. We be avoidin' the dreaded "blocking" that be slowin' us down, and instead, we be makin' the most of our time on the high seas of programming.

So, me hearties, remember this: asynchronous programming be like havin' a crew of efficient pirates, workin' on different tasks at once. It be all about breakin' down tasks into smaller chunks, attachin' callbacks or promises to 'em, and lettin' 'em run in the background while ye be movin' on to the next adventure.

Now, go forth, me mateys, and embrace the power of asynchronous programming! May ye plunder the treasures of efficiency and sail the seas of productivity! Arrrr!

3. Tips for instructing gpt-3.5-turbo-0301

Best practices for instructing models may change from model version to model version. The advice that follows applies to gpt-3.5-turbo-0301 and may not apply to future models.

System messages

The system message can be used to prime the assistant with different personalities or behaviors, such as the role-play personas (the popular catgirl, for instance) that people often set up.

Note, however, that gpt-3.5-turbo-0301 does not generally pay as much attention to the system message as gpt-4-0314 or gpt-3.5-turbo-0613 do. For gpt-3.5-turbo-0301, I therefore recommend placing important instructions in the user message instead. Some developers have had success continually moving the system message to the end of the conversation, to keep the model's attention from drifting as the conversation grows longer.
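As a sketch of that workaround (one possible implementation, not an official recipe), you can re-append the system message on every call so it always sits at the end of the conversation:

# A sketch: keep the system message last so gpt-3.5-turbo-0301
# keeps paying attention to it (an assumed pattern, for illustration)
system_message = {"role": "system", "content": "You are a helpful assistant."}

def chat(history, user_text, model="gpt-3.5-turbo-0301"):
    """Append the new user message, then send the system message last."""
    history = history + [{"role": "user", "content": user_text}]
    response = openai.ChatCompletion.create(
        model=model,
        messages=history + [system_message],
        temperature=0,
    )
    reply = response["choices"][0]["message"]["content"]
    return history + [{"role": "assistant", "content": reply}], reply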

# An example of a system message that primes the assistant to explain concepts in great depth
response = openai.ChatCompletion.create(
    model=MODEL,
    messages=[
        {"role": "system", "content": "You are a friendly and helpful teaching assistant. You explain concepts in great depth using simple terms, and you give examples to help people learn. At the end of each explanation, you ask a question to check for understanding"},
        {"role": "user", "content": "Can you explain how fractions work?"},
    ],
    temperature=0,
)

print(response["choices"][0]["message"]["content"])

Of course! Fractions are a way to represent parts of a whole. They are made up of two numbers: a numerator and a denominator. The numerator tells you how many parts you have, and the denominator tells you how many equal parts make up the whole.

Let's take an example to understand this better. Imagine you have a pizza that is divided into 8 equal slices. If you eat 3 slices, you can represent that as the fraction 3/8. Here, the numerator is 3 because you ate 3 slices, and the denominator is 8 because the whole pizza is divided into 8 slices.

Fractions can also be used to represent numbers less than 1. For example, if you eat half of a pizza, you can write it as 1/2. Here, the numerator is 1 because you ate one slice, and the denominator is 2 because the whole pizza is divided into 2 equal parts.

Now, let's practice! If you eat 4 out of 6 slices of a pizza, how would you write that as a fraction?
# An example of a system message that primes the assistant to give brief, to-the-point answers
response = openai.ChatCompletion.create(
    model=MODEL,
    messages=[
        {"role": "system", "content": "You are a laconic assistant. You reply with brief, to-the-point answers with no elaboration."},
        {"role": "user", "content": "Can you explain how fractions work?"},
    ],
    temperature=0,
)

print(response["choices"][0]["message"]["content"])

Fractions represent parts of a whole. They have a numerator (top number) and a denominator (bottom number).

Few-shot prompting

In some cases, it's easier to show the model what you want than to tell it: providing a few examples (few-shot prompting) can make it easier to get the output you're after.

One way to show the model what you want is with faked example messages.

For example:

# An example of a faked few-shot conversation to prime the model into translating business jargon to simpler speech
response = openai.ChatCompletion.create(
    model=MODEL,
    messages=[
        {"role": "system", "content": "You are a helpful, pattern-following assistant."},
        {"role": "user", "content": "Help me translate the following corporate jargon into plain English."},
        {"role": "assistant", "content": "Sure, I'd be happy to!"},
        {"role": "user", "content": "New synergies will help drive top-line growth."},
        {"role": "assistant", "content": "Things working well together will increase revenue."},
        {"role": "user", "content": "Let's circle back when we have more bandwidth to touch base on opportunities for increased leverage."},
        {"role": "assistant", "content": "Let's talk later when we're less busy about how to do better."},
        {"role": "user", "content": "This late pivot means we don't have time to boil the ocean for the client deliverable."},
    ],
    temperature=0,
)

print(response["choices"][0]["message"]["content"])

This sudden change in direction means we don't have enough time to complete the entire project for the client.

To help clarify that the example messages are not part of a real conversation and shouldn't be referred back to by the model, you can try setting the name field of the system messages to example_user and example_assistant.

Transforming the few-shot example above, we could write:

# The business jargon translation example, but with example names for the example messages
response = openai.ChatCompletion.create(
    model=MODEL,
    messages=[
        {"role": "system", "content": "You are a helpful, pattern-following assistant that translates corporate jargon into plain English."},
        {"role": "system", "name":"example_user", "content": "New synergies will help drive top-line growth."},
        {"role": "system", "name": "example_assistant", "content": "Things working well together will increase revenue."},
        {"role": "system", "name":"example_user", "content": "Let's circle back when we have more bandwidth to touch base on opportunities for increased leverage."},
        {"role": "system", "name": "example_assistant", "content": "Let's talk later when we're less busy about how to do better."},
        {"role": "user", "content": "This late pivot means we don't have time to boil the ocean for the client deliverable."},
    ],
    temperature=0,
)

print(response["choices"][0]["message"]["content"])

This sudden change in direction means we don't have enough time to complete the entire project for the client.

Not every attempt at engineering a conversation will succeed at first.

If your first attempts fail, don't be afraid to experiment with different ways of priming or conditioning the model.

For example, one developer found an increase in accuracy after inserting a user message that said "Great job so far, these have been perfect", which helped condition the model into providing higher-quality responses.

For more ideas on how to increase the reliability of the model, see the guide on [techniques to improve reliability](../techniques_to_improve_reliability.md). It was written for non-chat models, but many of its principles still apply.

4. Counting tokens

When you submit a request, the API transforms the messages into a sequence of tokens, and billing is likewise based on the number of tokens consumed.

The number of tokens used affects:

  • the cost of the request
  • the time it takes to generate the response
  • whether the reply gets cut off by hitting the maximum token limit (4,096 for gpt-3.5-turbo, 8,192 for gpt-4)

You can use the following function to count the number of tokens a list of messages will consume.

Note that the exact way tokens are counted from messages may change from model to model. Treat the counts from the function below as an estimate, not a timeless guarantee.

In particular, requests that use the optional functions input will consume extra tokens on top of the estimates calculated below.

Read more in the guide on how to count tokens with tiktoken. We'll need the tiktoken library, so install it first.

!pip install --upgrade tiktoken
Collecting tiktoken
  Downloading tiktoken-0.4.0-cp38-cp38-macosx_10_9_x86_64.whl (798 kB)
     |████████████████████████████████| 798 kB 213 kB/s
Requirement already satisfied: regex>=2022.1.18 in /Users/linxi/anaconda3/envs/pytorch18/lib/python3.8/site-packages (from tiktoken) (2022.10.31)
Requirement already satisfied: requests>=2.26.0 in /Users/linxi/anaconda3/envs/pytorch18/lib/python3.8/site-packages (from tiktoken) (2.26.0)
Requirement already satisfied: idna<4,>=2.5 in /Users/linxi/anaconda3/envs/pytorch18/lib/python3.8/site-packages (from requests>=2.26.0->tiktoken) (3.3)
Requirement already satisfied: certifi>=2017.4.17 in /Users/linxi/anaconda3/envs/pytorch18/lib/python3.8/site-packages (from requests>=2.26.0->tiktoken) (2022.12.7)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /Users/linxi/anaconda3/envs/pytorch18/lib/python3.8/site-packages (from requests>=2.26.0->tiktoken) (1.26.7)
Requirement already satisfied: charset-normalizer~=2.0.0 in /Users/linxi/anaconda3/envs/pytorch18/lib/python3.8/site-packages (from requests>=2.26.0->tiktoken) (2.0.7)
Installing collected packages: tiktoken
Successfully installed tiktoken-0.4.0
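Before defining the full message-counting function, here's a quick look at what tiktoken itself does: it maps text to integer token IDs. A minimal sketch; exact counts vary by encoding:

import tiktoken

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
tokens = enc.encode("tiktoken is great!")
print(tokens)       # a list of integer token IDs
print(len(tokens))  # the number of tokens this string costs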
import tiktoken


def num_tokens_from_messages(messages, model="gpt-3.5-turbo-0613"):
    """Return the number of tokens used by a list of messages."""
    try:
        encoding = tiktoken.encoding_for_model(model)
    except KeyError:
        print("Warning: model not found. Using cl100k_base encoding.")
        encoding = tiktoken.get_encoding("cl100k_base")
    if model in {
        "gpt-3.5-turbo-0613",
        "gpt-3.5-turbo-16k-0613",
        "gpt-4-0314",
        "gpt-4-32k-0314",
        "gpt-4-0613",
        "gpt-4-32k-0613",
        }:
        tokens_per_message = 3
        tokens_per_name = 1
    elif model == "gpt-3.5-turbo-0301":
        tokens_per_message = 4  # every message follows <|start|>{role/name}\n{content}<|end|>\n
        tokens_per_name = -1  # if there's a name, the role is omitted
    elif "gpt-3.5-turbo" in model:
        print("Warning: gpt-3.5-turbo may update over time. Returning num tokens assuming gpt-3.5-turbo-0613.")
        return num_tokens_from_messages(messages, model="gpt-3.5-turbo-0613")
    elif "gpt-4" in model:
        print("Warning: gpt-4 may update over time. Returning num tokens assuming gpt-4-0613.")
        return num_tokens_from_messages(messages, model="gpt-4-0613")
    else:
        raise NotImplementedError(
            f"""num_tokens_from_messages() is not implemented for model {model}. See https://github.com/openai/openai-python/blob/main/chatml.md for information on how messages are converted to tokens."""
        )
    num_tokens = 0
    for message in messages:
        num_tokens += tokens_per_message
        for key, value in message.items():
            num_tokens += len(encoding.encode(value))
            if key == "name":
                num_tokens += tokens_per_name
    num_tokens += 3  # every reply is primed with <|start|>assistant<|message|>
    return num_tokens
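As a quick sanity check before the full comparison below, the function can be called directly on a made-up two-message chat:

# A quick sanity check with a hypothetical two-message conversation
print(num_tokens_from_messages([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]))  # prints a small integer token count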

Next, let's use the function above to verify that it matches the OpenAI API, counting how many tokens the same input consumes across different models.

# let's verify the function above matches the OpenAI API response

import openai

example_messages = [
    {
        "role": "system",
        "content": "You are a helpful, pattern-following assistant that translates corporate jargon into plain English.",
    },
    {
        "role": "system",
        "name": "example_user",
        "content": "New synergies will help drive top-line growth.",
    },
    {
        "role": "system",
        "name": "example_assistant",
        "content": "Things working well together will increase revenue.",
    },
    {
        "role": "system",
        "name": "example_user",
        "content": "Let's circle back when we have more bandwidth to touch base on opportunities for increased leverage.",
    },
    {
        "role": "system",
        "name": "example_assistant",
        "content": "Let's talk later when we're less busy about how to do better.",
    },
    {
        "role": "user",
        "content": "This late pivot means we don't have time to boil the ocean for the client deliverable.",
    },
]

for model in [
    "gpt-3.5-turbo-0301",
    "gpt-3.5-turbo-0613",
    "gpt-3.5-turbo",
    "gpt-4-0314",
    "gpt-4-0613",
    "gpt-4",
    ]:
    print(model)
    # example token count from the function defined above
    print(f"{num_tokens_from_messages(example_messages, model)} prompt tokens counted by num_tokens_from_messages().")
    # example token count from the OpenAI API
    try:
        response = openai.ChatCompletion.create(
            model=model,
            messages=example_messages,
            temperature=0,
            max_tokens=1,  # we're only counting input tokens here, so let's not waste tokens on the output
        )
        print(f'{response["usage"]["prompt_tokens"]} prompt tokens counted by the OpenAI API.')
        print()
    except openai.error.OpenAIError as e:
        print(e)
        print()
gpt-3.5-turbo-0301
127 prompt tokens counted by num_tokens_from_messages().
127 prompt tokens counted by the OpenAI API.

gpt-3.5-turbo-0613
129 prompt tokens counted by num_tokens_from_messages().
129 prompt tokens counted by the OpenAI API.

gpt-3.5-turbo
Warning: gpt-3.5-turbo may update over time. Returning num tokens assuming gpt-3.5-turbo-0613.
129 prompt tokens counted by num_tokens_from_messages().
129 prompt tokens counted by the OpenAI API.

gpt-4-0314
129 prompt tokens counted by num_tokens_from_messages().
The model: `gpt-4-0314` does not exist

gpt-4-0613
129 prompt tokens counted by num_tokens_from_messages().
The model: `gpt-4-0613` does not exist

gpt-4
Warning: gpt-4 may update over time. Returning num tokens assuming gpt-4-0613.
129 prompt tokens counted by num_tokens_from_messages().
The model: `gpt-4` does not exist

That wraps up this first installment of Python Meets OpenAI: how to format inputs to ChatGPT models.
