
Deep Learning Experiment: Handwritten Digit Recognition with Softmax

This article, offered for reference, introduces a deep learning experiment: implementing handwritten digit recognition with Softmax. I hope it is helpful; if there are errors or anything not fully considered, corrections are very welcome.

Background reading: AI遮天傳 DL-回歸與分類_老師我作業忘帶了的博客-CSDN博客


The MNIST Dataset

The MNIST handwritten digit dataset is widely used for image classification in machine learning. It contains 60,000 training samples and 10,000 test samples. The digits have been size-normalized and centered in fixed-size images. Each sample is a 784×1 vector, flattened from the original 28×28 grayscale image. The digits range from 0 to 9; some examples are shown below. Note: never use information about the test samples, in any form, during training.

[Figure: sample images from the MNIST dataset]
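
To eyeball a sample yourself, a flattened 784-vector can be reshaped back to 28×28 and displayed. This is a minimal sketch, assuming images and labels arrays have already been loaded (for example via the Dataset class in dataloader.py below):

import matplotlib.pyplot as plt

# un-flatten the 784-vector back into a 28x28 grayscale image and show it
img = images[0].reshape(28, 28)
plt.imshow(img, cmap='gray')
plt.title('label: {}'.format(labels[0]))
plt.show()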

Code Listing

  • data/ folder: stores the MNIST dataset. Download the data and extract it into this folder. Download link: MNIST handwritten digit database, Yann LeCun, Corinna Cortes and Chris Burges
  • solver.py: implements the training and testing pipeline;
  • dataloader.py: implements the data loader, used to prepare data for training and testing;
  • visualize.py: implements the plot_loss_and_acc function, which plots the loss and accuracy curves;
  • optimizer.py: implements an SGD optimizer with momentum, used to perform parameter updates;
  • loss.py: implements softmax_cross_entropy_loss, covering both the loss computation and its gradient;
  • runner.ipynb: runs the training and testing process once all the code is complete.

Requirements

  1. Record the training and test accuracy, and plot the training loss and accuracy curves;
  2. Compare the results with and without momentum; discuss the differences in training time, convergence, and accuracy;
  3. Tune other hyperparameters such as the learning rate and batch size, and observe how they affect classification performance. Write down the observations and record these new results in the report (a possible sweep is sketched after this list).
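
For requirement 3, one way to run such a sweep is to loop over candidate values and reuse the Solver class defined in solver.py below. A minimal sketch, where the value grids are illustrative rather than taken from the original experiment:

# illustrative hyperparameter sweep using the Solver class from solver.py
for lr in [0.01, 0.1, 0.5]:
    for batch_size in [50, 100, 200]:
        cfg = {
            'data_root': 'data',
            'max_epoch': 10,
            'batch_size': batch_size,
            'learning_rate': lr,
            'momentum': 0.9,
            'display_freq': 50,
        }
        runner = Solver(cfg)
        train_loss, train_acc = runner.train()
        _, test_acc = runner.test()
        print('lr={}, batch_size={}: test accuracy {:.4f}'.format(lr, batch_size, test_acc))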

A sample run produces the following results:

[Figures: training logs and the resulting loss and accuracy curves]

The code is as follows:

solver.py

import numpy as np

from layers import FCLayer

from dataloader import build_dataloader
from network import Network
from optimizer import SGD
from loss import SoftmaxCrossEntropyLoss
from visualize import plot_loss_and_acc

class Solver(object):
    def __init__(self, cfg):
        self.cfg = cfg

        # build dataloader
        train_loader, val_loader, test_loader = self.build_loader(cfg)
        self.train_loader = train_loader
        self.val_loader = val_loader
        self.test_loader = test_loader

        # build model
        self.model = self.build_model(cfg)

        # build optimizer
        self.optimizer = self.build_optimizer(self.model, cfg)

        # build evaluation criterion
        self.criterion = SoftmaxCrossEntropyLoss()

    @staticmethod
    def build_loader(cfg):
        train_loader = build_dataloader(
            cfg['data_root'], cfg['max_epoch'], cfg['batch_size'], shuffle=True, mode='train')

        val_loader = build_dataloader(
            cfg['data_root'], 1, cfg['batch_size'], shuffle=False, mode='val')

        test_loader = build_dataloader(
            cfg['data_root'], 1, cfg['batch_size'], shuffle=False, mode='test')

        return train_loader, val_loader, test_loader

    @staticmethod
    def build_model(cfg):
        model = Network()
        model.add(FCLayer(784, 10))
        return model

    @staticmethod
    def build_optimizer(model, cfg):
        return SGD(model, cfg['learning_rate'], cfg['momentum'])

    def train(self):
        max_epoch = self.cfg['max_epoch']

        epoch_train_loss, epoch_train_acc = [], []
        for epoch in range(max_epoch):

            iteration_train_loss, iteration_train_acc = [], []
            for iteration, (images, labels) in enumerate(self.train_loader):
                # forward pass
                logits = self.model.forward(images)
                loss, acc = self.criterion.forward(logits, labels)

                # backward_pass
                delta = self.criterion.backward()
                self.model.backward(delta)

                # update the model weights
                self.optimizer.step()

                # record loss and accuracy for this iteration
                iteration_train_loss.append(loss)
                iteration_train_acc.append(acc)

                # display iteration training info
                if iteration % self.cfg['display_freq'] == 0:
                    print("Epoch [{}][{}]\t Batch [{}][{}]\t Training Loss {:.4f}\t Accuracy {:.4f}".format(
                        epoch, max_epoch, iteration, len(self.train_loader), loss, acc))

            avg_train_loss, avg_train_acc = np.mean(iteration_train_loss), np.mean(iteration_train_acc)
            epoch_train_loss.append(avg_train_loss)
            epoch_train_acc.append(avg_train_acc)

            # validate
            avg_val_loss, avg_val_acc = self.validate()

            # display epoch training info
            print('\nEpoch [{}]\t Average training loss {:.4f}\t Average training accuracy {:.4f}'.format(
                epoch, avg_train_loss, avg_train_acc))

            # display epoch validation info
            print('Epoch [{}]\t Average validation loss {:.4f}\t Average validation accuracy {:.4f}\n'.format(
                epoch, avg_val_loss, avg_val_acc))

        return epoch_train_loss, epoch_train_acc

    def validate(self):
        logits_set, labels_set = [], []
        for images, labels in self.val_loader:
            logits = self.model.forward(images)
            logits_set.append(logits)
            labels_set.append(labels)

        logits = np.concatenate(logits_set)
        labels = np.concatenate(labels_set)
        loss, acc = self.criterion.forward(logits, labels)
        return loss, acc

    def test(self):
        logits_set, labels_set = [], []
        for images, labels in self.test_loader:
            logits = self.model.forward(images)
            logits_set.append(logits)
            labels_set.append(labels)

        logits = np.concatenate(logits_set)
        labels = np.concatenate(labels_set)
        loss, acc = self.criterion.forward(logits, labels)
        return loss, acc


if __name__ == '__main__':
    # You can modify the hyperparameters yourself.
    relu_cfg = {
        'data_root': 'data',
        'max_epoch': 10,
        'batch_size': 100,
        'learning_rate': 0.1,
        'momentum': 0.9,
        'display_freq': 50,
        'activation_function': 'relu',  # note: unused; build_model always creates a single FC layer
    }

    runner = Solver(relu_cfg)
    relu_loss, relu_acc = runner.train()

    test_loss, test_acc = runner.test()
    print('Final test accuracy {:.4f}\n'.format(test_acc))

    # You can modify the hyperparameters yourself.
    sigmoid_cfg = {
        'data_root': 'data',
        'max_epoch': 10,
        'batch_size': 100,
        'learning_rate': 0.1,
        'momentum': 0.9,
        'display_freq': 50,
        'activation_function': 'sigmoid',  # note: unused; build_model always creates a single FC layer
    }

    runner = Solver(sigmoid_cfg)
    sigmoid_loss, sigmoid_acc = runner.train()

    test_loss, test_acc = runner.test()
    print('Final test accuracy {:.4f}\n'.format(test_acc))

    plot_loss_and_acc({
        "relu": [relu_loss, relu_acc],
        "sigmoid": [sigmoid_loss, sigmoid_acc],
    })

dataloader.py

import os
import struct
import numpy as np


class Dataset(object):

    def __init__(self, data_root, mode='train', num_classes=10):
        assert mode in ['train', 'val', 'test']

        # load images and labels
        kind = {'train': 'train', 'val': 'train', 'test': 't10k'}[mode]
        labels_path = os.path.join(data_root, '{}-labels-idx1-ubyte'.format(kind))
        images_path = os.path.join(data_root, '{}-images-idx3-ubyte'.format(kind))

        with open(labels_path, 'rb') as lbpath:
            magic, n = struct.unpack('>II', lbpath.read(8))
            labels = np.fromfile(lbpath, dtype=np.uint8)

        with open(images_path, 'rb') as imgpath:
            magic, num, rows, cols = struct.unpack('>IIII', imgpath.read(16))
            images = np.fromfile(imgpath, dtype=np.uint8).reshape(len(labels), 784)

        if mode == 'train':
            # training images and labels
            self.images = images[:55000]  # shape: (55000, 784)
            self.labels = labels[:55000]  # shape: (55000,)

        elif mode == 'val':
            # validation images and labels
            self.images = images[55000:]  # shape: (5000, 784)
            self.labels = labels[55000:]  # shape: (5000, )

        else:
            # test data
            self.images = images  # shape: (10000, 784)
            self.labels = labels  # shape: (10000, )

        self.num_classes = 10

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        image = self.images[idx]
        label = self.labels[idx]

        # Normalize from [0, 255] to [0.0, 1.0], then subtract the per-image mean
        image = image / 255.0
        image = image - np.mean(image)

        return image, label


class IterationBatchSampler(object):

    def __init__(self, dataset, max_epoch, batch_size=2, shuffle=True):
        # max_epoch is accepted for interface consistency but not used here
        self.dataset = dataset
        self.batch_size = batch_size
        self.shuffle = shuffle

    def prepare_epoch_indices(self):
        indices = np.arange(len(self.dataset))

        if self.shuffle:
            np.random.shuffle(indices)

        # split into batches; the last batch may be smaller when the dataset
        # size is not a multiple of the batch size
        self.batch_indices = [
            indices[i:i + self.batch_size]
            for i in range(0, len(indices), self.batch_size)
        ]

    def __iter__(self):
        return iter(self.batch_indices)

    def __len__(self):
        return len(self.batch_indices)


class Dataloader(object):

    def __init__(self, dataset, sampler):
        self.dataset = dataset
        self.sampler = sampler

    def __iter__(self):
        self.sampler.prepare_epoch_indices()

        for batch_indices in self.sampler:
            batch_images = []
            batch_labels = []
            for idx in batch_indices:
                img, label = self.dataset[idx]
                batch_images.append(img)
                batch_labels.append(label)

            batch_images = np.stack(batch_images)
            batch_labels = np.stack(batch_labels)

            yield batch_images, batch_labels

    def __len__(self):
        return len(self.sampler)


def build_dataloader(data_root, max_epoch, batch_size, shuffle=False, mode='train'):
    dataset = Dataset(data_root, mode)
    sampler = IterationBatchSampler(dataset, max_epoch, batch_size, shuffle)
    data_loader = Dataloader(dataset, sampler)
    return data_loader
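
As a quick sanity check of the loader (assuming the extracted MNIST files are in data/), each training batch should have shapes (batch_size, 784) and (batch_size,):

# sanity check: inspect the shape of one training batch
loader = build_dataloader('data', max_epoch=1, batch_size=100, shuffle=True, mode='train')
for images, labels in loader:
    print(images.shape, labels.shape)  # expected: (100, 784) (100,)
    break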

loss.py

import numpy as np

# a small number to prevent dividing by zero, maybe useful for you
EPS = 1e-11

class SoftmaxCrossEntropyLoss(object):

    def forward(self, logits, labels):
        """
          Inputs: (minibatch)
          - logits: forward results from the last FCLayer, shape (batch_size, 10)
          - labels: the ground truth label, shape (batch_size, )
        """

        ############################################################################
        # TODO: Put your code here
        # Calculate the average accuracy and loss over the minibatch
        # Return the loss and acc, which will be used in solver.py
        # Hint: Maybe you need to save some arrays for backward

        self.one_hot_labels = np.zeros_like(logits)
        self.one_hot_labels[np.arange(len(logits)), labels] = 1

        # shift logits by the row-wise max before exponentiating, for numerical
        # stability; this leaves the softmax probabilities unchanged
        exp_shifted = np.exp(logits - logits.max(axis=1, keepdims=True))
        self.prob = exp_shifted / (EPS + exp_shifted.sum(axis=1, keepdims=True))

        # calculate the accuracy
        preds = np.argmax(self.prob, axis=1)  # argmax over probabilities (same as over logits)
        acc = np.mean(preds == labels)

        # calculate the loss
        loss = np.sum(-self.one_hot_labels * np.log(self.prob + EPS), axis=1)
        loss = np.mean(loss)
        ############################################################################

        return loss, acc

    def backward(self):

        ############################################################################
        # TODO: Put your code here
        # Calculate and return the gradient (have the same shape as logits)

        return self.prob - self.one_hot_labels
        ############################################################################
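
For reference, the value returned by backward is the standard softmax cross-entropy gradient. With logits z, softmax probabilities p, and one-hot labels y:

$$ p_i = \frac{e^{z_i}}{\sum_j e^{z_j}}, \qquad L = -\sum_i y_i \log p_i, \qquad \frac{\partial L}{\partial z_i} = p_i - y_i $$

which is exactly self.prob - self.one_hot_labels in the code above.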

network.py

class Network(object):
    def __init__(self):
        self.layerList = []
        self.numLayer = 0

    def add(self, layer):
        self.numLayer += 1
        self.layerList.append(layer)

    def forward(self, x):
        # forward layer by layer
        for i in range(self.numLayer):
            x = self.layerList[i].forward(x)
        return x

    def backward(self, delta):
        # backward layer by layer
        for i in reversed(range(self.numLayer)): # reversed
            delta = self.layerList[i].backward(delta)
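
One note: solver.py imports FCLayer from layers.py, which is not reproduced in this post. Below is a minimal sketch of what such a fully connected layer could look like, assuming the trainable/W/b/grad_W/grad_b interface that network.py and optimizer.py rely on (the weight initialization scale here is an assumption, not from the original code):

import numpy as np

class FCLayer(object):
    """Fully connected layer computing y = xW + b."""

    def __init__(self, num_input, num_output):
        self.trainable = True  # flag checked by the SGD optimizer
        self.W = 0.01 * np.random.randn(num_input, num_output)
        self.b = np.zeros(num_output)

    def forward(self, x):
        self.x = x  # cache the input for the backward pass
        return np.dot(x, self.W) + self.b

    def backward(self, delta):
        # average the parameter gradients over the minibatch
        batch_size = delta.shape[0]
        self.grad_W = np.dot(self.x.T, delta) / batch_size
        self.grad_b = np.mean(delta, axis=0)
        # propagate the gradient to the previous layer
        return np.dot(delta, self.W.T)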

optimizer.py

import numpy as np

class SGD(object):
    def __init__(self, model, learning_rate, momentum=0.0):
        self.model = model
        self.learning_rate = learning_rate
        self.momentum = momentum

    def step(self):
        """One backpropagation step, update weights layer by layer"""

        layers = self.model.layerList
        for layer in layers:
            if layer.trainable:

                ############################################################################
                # TODO: Put your code here
                # Calculate diff_W and diff_b using layer.grad_W and layer.grad_b.
                # You need to add momentum to this.

                # Weight update with momentum: keep a running velocity
                # (diff_W, diff_b) for both the weights and the bias
                if not hasattr(layer, 'diff_W'):
                    layer.diff_W = 0.0
                    layer.diff_b = 0.0

                layer.diff_W = layer.grad_W + self.momentum * layer.diff_W
                layer.diff_b = layer.grad_b + self.momentum * layer.diff_b

                layer.W += -self.learning_rate * layer.diff_W
                layer.b += -self.learning_rate * layer.diff_b

                # # Weight update without momentum
                # layer.W += -self.learning_rate * layer.grad_W
                # layer.b += -self.learning_rate * layer.grad_b

                ############################################################################
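
In equations, the update implemented above keeps a running velocity v for each parameter θ:

$$ v_t = \mu\, v_{t-1} + g_t, \qquad \theta_t = \theta_{t-1} - \eta\, v_t $$

where g_t is the current gradient (layer.grad_W or layer.grad_b), μ is the momentum coefficient, and η is the learning rate. Setting momentum to 0 recovers plain SGD, which gives the baseline for the comparison in requirement 2.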

visualize.py

import matplotlib.pyplot as plt
import numpy as np

def plot_loss_and_acc(loss_and_acc_dict):
    # visualize loss curve
    plt.figure()

    min_loss, max_loss = 100.0, 0.0
    for key, (loss_list, acc_list) in loss_and_acc_dict.items():
        min_loss = min(min_loss, min(loss_list))
        max_loss = max(max_loss, max(loss_list))

        num_epoch = len(loss_list)
        plt.plot(range(1, 1 + num_epoch), loss_list, '-s', label=key)

    plt.xlabel('Epoch')
    plt.ylabel('Loss')
    plt.legend()
    plt.xticks(range(0, num_epoch + 1, 2))
    plt.axis([0, num_epoch + 1, min_loss - 0.1, max_loss + 0.1])
    plt.show()

    # visualize acc curve
    plt.figure()

    min_acc, max_acc = 1.0, 0.0
    for key, (loss_list, acc_list) in loss_and_acc_dict.items():
        min_acc = min(min_acc, min(acc_list))
        max_acc = max(max_acc, max(acc_list))

        num_epoch = len(acc_list)
        plt.plot(range(1, 1 + num_epoch), acc_list, '-s', label=key)

    plt.xlabel('Epoch')
    plt.ylabel('Accuracy')
    plt.legend()
    plt.xticks(range(0, num_epoch + 1, 2))
    plt.axis([0, num_epoch + 1, min_acc, 1.0])
    plt.show()

That wraps up this walkthrough of the deep learning experiment: handwritten digit recognition with Softmax.
