
Linear Regression in Mojo with NDBuffer


Linear regression is the simplest machine learning algorithm. In this article I will use Mojo's NDBuffer to implement linear regression from scratch, reusing the NDArray class developed in the previous article. First, import the necessary libraries and bring in the NDArray definition:

# common import
from String import String
from Bool import Bool
from List import VariadicList
from Buffer import NDBuffer
from List import DimList
from DType import DType
from Pointer import DTypePointer
from TargetInfo import dtype_sizeof, dtype_simd_width
from Index import StaticIntTuple
from Random import rand

alias nelts = dtype_simd_width[DType.f32]()

struct NDArray[rank:Int, dims:DimList, dtype:DType]:
    var ndb:NDBuffer[rank, dims, dtype]

    fn __init__(inout self, size:Int):
        # allocate a flat buffer of `size` elements and wrap it in an NDBuffer
        let data = DTypePointer[dtype].alloc(size)
        self.ndb = NDBuffer[rank, dims, dtype](data)

    fn __getitem__(self, idxs:StaticIntTuple[rank]) -> SIMD[dtype,1]:
        # load a single scalar at the given index tuple
        return self.ndb.simd_load[1](idxs)

    fn __setitem__(self, idxs:StaticIntTuple[rank], val:SIMD[dtype,1]):
        # store a single scalar at the given index tuple
        self.ndb.simd_store[1](idxs, val)
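
If you prefer to think in Python terms, here is a minimal sketch (plain Python, not the Mojo API; the class name FlatNDArray and its helpers are made up for illustration) of what this wrapper gives us: a flat buffer addressed through a row-major index tuple.

class FlatNDArray:
    """Illustrative stand-in for the Mojo NDArray above: flat storage plus row-major indexing."""
    def __init__(self, shape):
        self.shape = shape
        size = 1
        for d in shape:
            size *= d
        self.data = [0.0] * size              # flat buffer, like DTypePointer.alloc(size)

    def _offset(self, idxs):
        # row-major offset computed from the index tuple and the shape
        off = 0
        for dim, i in zip(self.shape, idxs):
            off = off * dim + i
        return off

    def __getitem__(self, idxs):
        return self.data[self._offset(idxs)]

    def __setitem__(self, idxs, val):
        self.data[self._offset(idxs)] = val

a = FlatNDArray((5, 3))
a[(0, 1)] = 2.5
print(a[(0, 1)])    # 2.5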

Let's assume we want to learn this function:
$$y = W \cdot X$$
Where:

  • W is the parameter vector.
  • X is the sample design matrix. Each row is a sample, and each sample is an n-dimensional vector $\boldsymbol{x} \in R^{n}$. If we have m samples then $X \in R^{m \times n}$.
    Here we will deal with a very simple toy problem. We assume $n = 3$ and $m = 5$. Let's define the i-th sample:
    $$\boldsymbol{x}^{(i)} = \begin{bmatrix} x^{(i)}_1 \\ x^{(i)}_2 \\ x^{(i)}_3 \end{bmatrix} \in R^{3 \times 1}, \quad i \in \{1, 2, 3, 4, 5\}$$
    Notes:
  • i is the index of the sample;
  • the subscripts 1, 2, 3 index the feature dimensions;
  • m is the total number of samples, in this case m = 5;
  • n is the dimension of the feature vector, in this case n = 3.

Let’s generate the dataset:

alias X_rank = 2
alias r1 = 5
alias r2 = 3
var X_size = r1 * r2
var X = NDArray[X_rank, DimList(r1, r2), DType.f32](X_size)
# generate random numbers, scale them into [1, 6), and store them in X
var rvs = DTypePointer[DType.f32].alloc(X_size)
rand[DType.f32](rvs, X_size)
for d1 in range(r1):
    for d2 in range(r2):
        X[StaticIntTuple[X_rank](d1, d2)] = rvs.load(d1*r2+d2)*5.0 + 1.0

Let's define the ground-truth parameter $\boldsymbol{w}$:
$$\boldsymbol{w} = \begin{bmatrix} 1.1 \\ 2.2 \\ 3.3 \end{bmatrix}$$

Let's define the ground-truth hypothesis function (with bias $b = 1.8$):
$$y = \boldsymbol{w}^{T} \cdot \boldsymbol{x} + b = \begin{bmatrix} 1.1 & 2.2 & 3.3 \end{bmatrix} \cdot \begin{bmatrix} x_{1} \\ x_{2} \\ x_{3} \end{bmatrix} + b = 1.1 \cdot x_{1} + 2.2 \cdot x_{2} + 3.3 \cdot x_{3} + 1.8$$

Now let's define the trainable parameter $\boldsymbol{w}$ and the bias $b$, initialized to small values (these are what we will learn):

alias w_rank = 2
alias w_r1 = 3
alias w_r2 = 1
var w = NDArray[w_rank, DimList(w_r1, w_r2), DType.f32](w_r1 * w_r2)
w[StaticIntTuple[w_rank](0,0)] = 0.01
w[StaticIntTuple[w_rank](1,0)] = 0.02
w[StaticIntTuple[w_rank](2,0)] = 0.03
var b = SIMD[DType.f32, 1](0.0)

Now we can compute the ground-truth labels $y$:

alias y_rank = 1
alias y_r1 = 5
var y = NDArray[y_rank, DimList(y_r1), DType.f32](y_r1)
for d1 in range(y_r1):
    y[StaticIntTuple[y_rank](d1)] = (1.1 * X[StaticIntTuple[X_rank](d1,0)] +
                                     2.2 * X[StaticIntTuple[X_rank](d1,1)] +
                                     3.3 * X[StaticIntTuple[X_rank](d1,2)] + 1.8)
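
For reference, here is the same data generation and labeling in vectorized NumPy form. This is only a sketch to show the matrix view of what the loops above compute; the names X_np, w_true, and b_true are introduced here and are not part of the Mojo code.

import numpy as np

rng = np.random.default_rng(0)
X_np = rng.random((5, 3)) * 5.0 + 1.0     # 5 samples, 3 features, values in [1, 6)
w_true = np.array([1.1, 2.2, 3.3])        # ground-truth weights
b_true = 1.8                              # ground-truth bias
y_np = X_np @ w_true + b_true             # shape (5,), one label per sample
print(y_np)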

Let's define the function get_batch to fetch a mini-batch from the training dataset:

alias batch_size = 2
alias batch_rank = 2
# batch_idx can only be 0 or 1; with batch_size = 2 and 5 samples, the last sample in X is ignored.
fn get_batch(inout batch_X:NDArray[batch_rank, DimList(batch_size, r2), DType.f32],
             inout batch_y:NDArray[y_rank, DimList(batch_size), DType.f32],
             X:NDArray[X_rank, DimList(r1, r2), DType.f32],
             y:NDArray[y_rank, DimList(y_r1), DType.f32],
             batch_idx:Int):
    for b_idx in range(batch_size):
        # copy the label of sample (batch_size*batch_idx + b_idx) into the batch
        batch_y[StaticIntTuple[y_rank](b_idx)] = y[StaticIntTuple[y_rank](batch_size*batch_idx + b_idx)]
        for c_idx in range(r2):
            # copy each feature of that sample into the batch matrix
            batch_X[StaticIntTuple[batch_rank](b_idx, c_idx)] = X[StaticIntTuple[X_rank](batch_size*batch_idx + b_idx, c_idx)]
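
In NumPy terms get_batch is just a slice. A small sketch of the same bookkeeping (the names get_batch_np, X_np, and y_np are illustrative, not from the Mojo code):

import numpy as np

def get_batch_np(X_np, y_np, batch_idx, batch_size=2):
    # rows [batch_size*batch_idx, batch_size*(batch_idx+1)) of the dataset
    start = batch_size * batch_idx
    return X_np[start:start + batch_size], y_np[start:start + batch_size]

# batch_idx can be 0 or 1; with 5 samples the last one is never selected,
# exactly like the Mojo version above.
X_np = np.arange(15, dtype=np.float32).reshape(5, 3)
y_np = np.arange(5, dtype=np.float32)
Xb, yb = get_batch_np(X_np, y_np, 1)
print(Xb)    # rows 2 and 3
print(yb)    # labels 2 and 3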

Now let's go through the math of linear regression. For the i-th sample we will omit the (i) superscript for simplicity. The predicted label $\hat{y}$ is:
$$\hat{y} = w_{1} \cdot x_{1} + w_{2} \cdot x_{2} + w_{3} \cdot x_{3} + b$$
Since we have the ground-truth label $y$, we define the loss function as:
$$\mathcal{l} = \frac{1}{2}(\hat{y} - y)^{2} = \frac{1}{2}(w_{1} \cdot x_{1} + w_{2} \cdot x_{2} + w_{3} \cdot x_{3} + b - y)^{2}$$
Following the linear regression algorithm, we initialize the parameter $\boldsymbol{w}$ with small random values and $b$ with zero, compute $\hat{y}$ under this parameter setting, and then compute the loss, which measures how good the parameters are. Our task is to find the parameter setting that minimizes the loss:
$$\arg\min_{\boldsymbol{w},\,b} \mathcal{l}$$

To find the minimizing parameters, we compute each parameter's gradient of the loss and move the parameter against the gradient direction. This is the gradient descent algorithm. Let's derive the gradient of the loss with respect to $w_{1}$:
$$\frac{\partial \mathcal{l}}{\partial w_{1}} = \frac{\partial \big( \frac{1}{2}(w_{1} \cdot x_{1} + w_{2} \cdot x_{2} + w_{3} \cdot x_{3} + b - y)^{2} \big)}{\partial w_{1}} \\ = \frac{\partial \big( \frac{1}{2}(w_{1} \cdot x_{1} + w_{2} \cdot x_{2} + w_{3} \cdot x_{3} + b - y)^{2} \big)}{\partial (w_{1} \cdot x_{1} + w_{2} \cdot x_{2} + w_{3} \cdot x_{3} + b - y)} \cdot \frac{\partial (w_{1} \cdot x_{1} + w_{2} \cdot x_{2} + w_{3} \cdot x_{3} + b - y)}{\partial w_{1}} \\ = (w_{1} \cdot x_{1} + w_{2} \cdot x_{2} + w_{3} \cdot x_{3} + b - y) \cdot x_{1}$$
We used the chain rule in the formula above. The gradients for all parameters follow in the same way:
$$\frac{\partial \mathcal{l}}{\partial w_{1}} = (w_{1} \cdot x_{1} + w_{2} \cdot x_{2} + w_{3} \cdot x_{3} + b - y) \cdot x_{1} \\ \frac{\partial \mathcal{l}}{\partial w_{2}} = (w_{1} \cdot x_{1} + w_{2} \cdot x_{2} + w_{3} \cdot x_{3} + b - y) \cdot x_{2} \\ \frac{\partial \mathcal{l}}{\partial w_{3}} = (w_{1} \cdot x_{1} + w_{2} \cdot x_{2} + w_{3} \cdot x_{3} + b - y) \cdot x_{3} \\ \frac{\partial \mathcal{l}}{\partial b} = w_{1} \cdot x_{1} + w_{2} \cdot x_{2} + w_{3} \cdot x_{3} + b - y$$
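
A quick way to convince yourself these formulas are correct is a finite-difference check. The sketch below uses plain Python with made-up sample values (x, y, w, b here are hypothetical numbers chosen only for the check):

# Finite-difference check of the per-sample gradients derived above.
x = [2.0, 3.0, 4.0]           # one sample: x1, x2, x3
y = 20.0                      # its ground-truth label
w = [0.1, 0.2, 0.3]           # current parameters
b = 0.0

def loss(w, b):
    err = w[0]*x[0] + w[1]*x[1] + w[2]*x[2] + b - y
    return 0.5 * err * err

err = w[0]*x[0] + w[1]*x[1] + w[2]*x[2] + b - y
analytic = [err * x[0], err * x[1], err * x[2], err]     # dl/dw1, dl/dw2, dl/dw3, dl/db

eps = 1e-6
numeric = []
for k in range(3):
    w_plus = list(w)
    w_plus[k] += eps
    numeric.append((loss(w_plus, b) - loss(w, b)) / eps)
numeric.append((loss(w, b + eps) - loss(w, b)) / eps)

print(analytic)
print(numeric)    # should match the analytic gradients closely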

Assume the learning rate is $\alpha$; then we can update the parameters:
$$w_{1} := w_{1} - \alpha \cdot \frac{\partial \mathcal{l}}{\partial w_{1}} \\ w_{2} := w_{2} - \alpha \cdot \frac{\partial \mathcal{l}}{\partial w_{2}} \\ w_{3} := w_{3} - \alpha \cdot \frac{\partial \mathcal{l}}{\partial w_{3}} \\ b := b - \alpha \cdot \frac{\partial \mathcal{l}}{\partial b}$$
Now we can put everything together in the training loop:

let epochs = 1000
var y_hat = NDArray[y_rank, DimList(batch_size), DType.f32](batch_size)
alias loss_rank = 1
var loss = NDArray[loss_rank, DimList(batch_size), DType.f32](batch_size)
var coff = 0.5
var Xi = NDArray[batch_rank, DimList(batch_size, r2), DType.f32](batch_size*r2)
var yi = NDArray[y_rank, DimList(batch_size), DType.f32](batch_size)
var lr = 0.001
for epoch in range(epochs):
    # with 5 samples and batch_size = 2 there are 2 full mini-batches per epoch
    for bidx in range(batch_size):
        get_batch(Xi, yi, X, y, bidx)
        # forward pass
        y_hat[StaticIntTuple[y_rank](0)] = (
                    w[StaticIntTuple[w_rank](0,0)]*Xi[StaticIntTuple[X_rank](0,0)] +
                    w[StaticIntTuple[w_rank](1,0)]*Xi[StaticIntTuple[X_rank](0,1)] +
                    w[StaticIntTuple[w_rank](2,0)]*Xi[StaticIntTuple[X_rank](0,2)] +
                    b)
        y_hat[StaticIntTuple[y_rank](1)] = (
                    w[StaticIntTuple[w_rank](0,0)]*Xi[StaticIntTuple[X_rank](1,0)] +
                    w[StaticIntTuple[w_rank](1,0)]*Xi[StaticIntTuple[X_rank](1,1)] +
                    w[StaticIntTuple[w_rank](2,0)]*Xi[StaticIntTuple[X_rank](1,2)] +
                    b)
        # calculate the loss against the mini-batch labels yi
        loss[StaticIntTuple[loss_rank](0)] = coff * (
                (yi[StaticIntTuple[y_rank](0)] - y_hat[StaticIntTuple[y_rank](0)]) *
                (yi[StaticIntTuple[y_rank](0)] - y_hat[StaticIntTuple[y_rank](0)]))
        loss[StaticIntTuple[loss_rank](1)] = coff * (
                (yi[StaticIntTuple[y_rank](1)] - y_hat[StaticIntTuple[y_rank](1)]) *
                (yi[StaticIntTuple[y_rank](1)] - y_hat[StaticIntTuple[y_rank](1)]))
        # backward pass: gradients summed over the two samples in the batch
        g_w1 = ((y_hat[StaticIntTuple[y_rank](0)] - yi[StaticIntTuple[y_rank](0)])*Xi[StaticIntTuple[X_rank](0,0)] +
                (y_hat[StaticIntTuple[y_rank](1)] - yi[StaticIntTuple[y_rank](1)])*Xi[StaticIntTuple[X_rank](1,0)])
        w[StaticIntTuple[w_rank](0,0)] -= lr*g_w1
        g_w2 = ((y_hat[StaticIntTuple[y_rank](0)] - yi[StaticIntTuple[y_rank](0)])*Xi[StaticIntTuple[X_rank](0,1)] +
                (y_hat[StaticIntTuple[y_rank](1)] - yi[StaticIntTuple[y_rank](1)])*Xi[StaticIntTuple[X_rank](1,1)])
        w[StaticIntTuple[w_rank](1,0)] -= lr*g_w2
        g_w3 = ((y_hat[StaticIntTuple[y_rank](0)] - yi[StaticIntTuple[y_rank](0)])*Xi[StaticIntTuple[X_rank](0,2)] +
                (y_hat[StaticIntTuple[y_rank](1)] - yi[StaticIntTuple[y_rank](1)])*Xi[StaticIntTuple[X_rank](1,2)])
        w[StaticIntTuple[w_rank](2,0)] -= lr*g_w3
        g_b = ((y_hat[StaticIntTuple[y_rank](0)] - yi[StaticIntTuple[y_rank](0)]) +
               (y_hat[StaticIntTuple[y_rank](1)] - yi[StaticIntTuple[y_rank](1)]))
        b -= lr*g_b
        loss_val = (loss[StaticIntTuple[loss_rank](0)] +
                    loss[StaticIntTuple[loss_rank](1)])
        print('epoch_', epoch, ': idx=', bidx, ' loss=', loss_val,
              '; w1=', w[StaticIntTuple[w_rank](0,0)],
              ', w2=', w[StaticIntTuple[w_rank](1,0)],
              ', w3=', w[StaticIntTuple[w_rank](2,0)], ', b=', b, ';')
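
For comparison, here is the same mini-batch gradient descent written in vectorized NumPy. It is only a sketch of the algorithm (names such as w_hat and b_hat are introduced here), not a translation of the Mojo API; after training, the estimates should have moved toward the ground-truth weights 1.1, 2.2, 3.3 and bias 1.8 (increase the number of epochs for a tighter fit).

import numpy as np

rng = np.random.default_rng(0)
X_np = rng.random((5, 3)) * 5.0 + 1.0
w_true, b_true = np.array([1.1, 2.2, 3.3]), 1.8
y_np = X_np @ w_true + b_true

w_hat = np.array([0.01, 0.02, 0.03])      # same initialization as the Mojo code
b_hat = 0.0
lr, batch_size, epochs = 0.001, 2, 1000

for epoch in range(epochs):
    for batch_idx in range(2):            # two full mini-batches per epoch
        start = batch_size * batch_idx
        Xb = X_np[start:start + batch_size]
        yb = y_np[start:start + batch_size]
        y_hat = Xb @ w_hat + b_hat        # forward pass
        err = y_hat - yb
        g_w = Xb.T @ err                  # gradients summed over the batch
        g_b = err.sum()
        w_hat -= lr * g_w                 # gradient descent update
        b_hat -= lr * g_b

print(w_hat, b_hat)    # should move toward [1.1, 2.2, 3.3] and 1.8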
