
Using the DeOldify Model to Restore Old Video Footage (with Solutions to Problems Encountered at Runtime)

This article describes how to use the DeOldify model to restore (colorize) video footage, along with solutions to the problems I ran into while running it. I hope it helps; if anything here is wrong or incomplete, corrections are welcome.

1. Model Weights

This was my first time doing restoration with the DeOldify model. Since I wasn't clear on how to use it, I looked up this article: https://blog.csdn.net/weixin_42512684/article/details/117376885

The article was published in 2021, and the code its author provided solved many problems. However, the model weights it links to (such as the stable model) can no longer be downloaded, so I am posting the three sets of weights I collected.

For processing video files, the Video weights are all you need.


Model weights (Baidu Netdisk):

https://pan.baidu.com/s/1Oadr1qk6vWQpokFwsprH8Q?pwd=iybk
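Once downloaded, DeOldify loads the `.pth` generator weights from a `models/` folder under the project root (this corresponds to the `root_folder` parameter seen in the code later in this article). A quick sanity check, assuming the default layout and the three published weight-file names:

```python
from pathlib import Path

# DeOldify expects the .pth generator weights in a "models" folder under the
# project root (root_folder / 'models'). The file names below are the three
# published generators; only ColorizeVideo_gen.pth is needed for video work.
weights_dir = Path('./models')
expected = ['ColorizeVideo_gen.pth', 'ColorizeStable_gen.pth', 'ColorizeArtistic_gen.pth']

status = {name: (weights_dir / name).exists() for name in expected}
for name, found in status.items():
    print(f"{name}: {'found' if found else 'MISSING'}")
```

If `ColorizeVideo_gen.pth` shows as missing, the video colorizer further below will fail to load its generator.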

2. Code Issues

First, download the project archive from https://github.com/jantic/DeOldify

Extract the archive to your project directory, then install the dependencies:

pip install -r requirements.txt

Note that requirements.txt is incomplete: several packages are missing and need to be pip installed afterwards, one by one, as import errors name them. If you don't know a package's pip name, it is easy to look up online.
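One way to see up front which modules are still missing is to probe for them. The candidate list below is my own guess at what this walkthrough ends up importing; note that pip package names don't always match import names (e.g. `opencv-python` imports as `cv2`, `yt-dlp` as `yt_dlp`, `ffmpeg-python` as `ffmpeg`):

```python
import importlib.util

# Modules this walkthrough ends up importing; extend the list as new
# ImportErrors appear while running the scripts below.
candidates = ['fastai', 'torch', 'ffmpeg', 'yt_dlp', 'cv2', 'gradio', 'moviepy']

# find_spec returns None for a top-level module that is not installed
missing = [m for m in candidates if importlib.util.find_spec(m) is None]
print('missing modules:', ', '.join(missing) if missing else 'none')
```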

Running the script from that article as-is raised an error. The following modified run script resolves it:

from deoldify import device
from deoldify.device_id import DeviceId

#choices:  CPU, GPU0...GPU7
device.set(device=DeviceId.GPU0)

from deoldify.visualize import *
plt.style.use('dark_background')
import warnings
warnings.filterwarnings("ignore", category=UserWarning, message=".*?Your .*? set is empty.*?")

colorizer = get_video_colorizer(workfolder='.')


#NOTE:  Max is 44 with 11GB video cards.  21 is a good default
render_factor= 21

file_name = 'video'

file_name_ext = file_name + '.mp4'
# result_path = Path('E:./video_result.mp4')

def progress_bar(progress, message):
    print(f"{message}: {progress}%")
colorizer.colorize_from_file_name(file_name_ext, render_factor=render_factor, g_process_bar=progress_bar)
# colorizer.save_colorized_video(Path(file_name_ext), result_path)

Running further, you hit a recurring class of errors: the same kind of fix is needed in many places. One example is shown below (there are other bugs too, but no need to list them all):

 File "E:\***\******\sd-webui-deoldify-main\deoldify\visualize.py", line 237, in _extract_raw_frames
    bwframes_folder = self.bwframes_root / (source_path.stem)
                      ~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~
TypeError: unsupported operand type(s) for /: 'str' and 'str'
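The root cause: the `/` operator only joins paths when at least the left operand is a `pathlib.Path`; with two plain strings it raises exactly this TypeError. A minimal illustration of the failure and the two ways to fix it (the second is the approach taken in the modified visualize.py below):

```python
import os
from pathlib import Path

root, stem = 'bwframes', 'video'

# Two plain strings do not support "/":
try:
    joined = root / stem
except TypeError as e:
    print('TypeError:', e)  # unsupported operand type(s) for /: 'str' and 'str'

# Fix 1: make the left operand a Path, then "/" works
joined = Path(root) / stem

# Fix 2: stay with plain strings and use os.path.join
joined2 = os.path.join(root, stem)
print(joined, joined2)
```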

As you can see, visualize.py needs changes in many places. My modified visualize.py is below; this is just one way to fix it:

from fastai.core import *
from fastai.vision import *
from matplotlib.axes import Axes
from .filters import IFilter, MasterFilter, ColorizerFilter
from .generators import gen_inference_deep, gen_inference_wide
from PIL import Image
import ffmpeg
import yt_dlp as youtube_dl
import gc
import requests
from io import BytesIO
import base64
import cv2
import logging
import re      # used by _purge_images
import shutil  # used by _build_video
import gradio as gr
from pathlib import Path

class ModelImageVisualizer:
    def __init__(self, filter: IFilter, results_dir: str = None):
        self.filter = filter
        self.results_dir = None if results_dir is None else Path(results_dir)
        # results_dir may legitimately be None (e.g. when used via VideoColorizer)
        if self.results_dir is not None:
            self.results_dir.mkdir(parents=True, exist_ok=True)

    def _clean_mem(self):
        torch.cuda.empty_cache()
        # gc.collect()

    def _open_pil_image(self, path: Path) -> Image:
        return PIL.Image.open(path).convert('RGB')

    def _get_image_from_url(self, url: str) -> Image:
        response = requests.get(url, timeout=30, headers={'user-agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.94 Safari/537.36'})
        img = PIL.Image.open(BytesIO(response.content)).convert('RGB')
        return img

    def plot_transformed_image_from_url(
        self,
        url: str,
        path: str = 'test_images/image.png',
        results_dir:Path = None,
        figsize: Tuple[int, int] = (20, 20),
        render_factor: int = None,
        display_render_factor: bool = False,
        compare: bool = False,
        post_process: bool = True,
    ) -> Path:
        img = self._get_image_from_url(url)
        img.save(path)
        return self.plot_transformed_image(
            path=path,
            results_dir=results_dir,
            figsize=figsize,
            render_factor=render_factor,
            display_render_factor=display_render_factor,
            compare=compare,
            post_process = post_process,
        )

    def plot_transformed_image(
        self,
        path: str,
        results_dir:Path = None,
        figsize: Tuple[int, int] = (20, 20),
        render_factor: int = None,
        display_render_factor: bool = False,
        compare: bool = False,
        post_process: bool = True,
    ) -> Path:
        path = Path(path)
        if results_dir is None:
            results_dir = Path(self.results_dir)
        result = self.get_transformed_image(
            path, render_factor, post_process=post_process
        )
        orig = self._open_pil_image(path)
        if compare:
            self._plot_comparison(
                figsize, render_factor, display_render_factor, orig, result
            )
        else:
            self._plot_solo(figsize, render_factor, display_render_factor, result)

        orig.close()
        result_path = self._save_result_image(path, result, results_dir=results_dir)
        result.close()
        return result_path

    def _plot_comparison(
        self,
        figsize: Tuple[int, int],
        render_factor: int,
        display_render_factor: bool,
        orig: Image,
        result: Image,
    ):
        fig, axes = plt.subplots(1, 2, figsize=figsize)
        self._plot_image(
            orig,
            axes=axes[0],
            figsize=figsize,
            render_factor=render_factor,
            display_render_factor=False,
        )
        self._plot_image(
            result,
            axes=axes[1],
            figsize=figsize,
            render_factor=render_factor,
            display_render_factor=display_render_factor,
        )

    def _plot_solo(
        self,
        figsize: Tuple[int, int],
        render_factor: int,
        display_render_factor: bool,
        result: Image,
    ):
        fig, axes = plt.subplots(1, 1, figsize=figsize)
        self._plot_image(
            result,
            axes=axes,
            figsize=figsize,
            render_factor=render_factor,
            display_render_factor=display_render_factor,
        )

    def _save_result_image(self, source_path: Path, image: Image, results_dir = None) -> Path:
        if results_dir is None:
            results_dir = Path(self.results_dir)
        result_path = results_dir / source_path.name
        image.save(result_path)
        return result_path

    def get_transformed_image(
        self, path: Path, render_factor: int = None, post_process: bool = True,
    ) -> Image:
        self._clean_mem()
        orig_image = self._open_pil_image(path)
        filtered_image = self.filter.filter(
            orig_image, orig_image, render_factor=render_factor,post_process=post_process
        )

        return filtered_image
    
    # Colorize directly from an in-memory image
    def get_transformed_image_from_image(
        self, image: Image, render_factor: int = None, post_process: bool = True,
    ) -> Image:
        self._clean_mem()
        orig_image = image
        filtered_image = self.filter.filter(
            orig_image, orig_image, render_factor=render_factor,post_process=post_process
        )

        return filtered_image

    def _plot_image(
        self,
        image: Image,
        render_factor: int,
        axes: Axes = None,
        figsize=(20, 20),
        display_render_factor = False,
    ):
        if axes is None:
            _, axes = plt.subplots(figsize=figsize)
        axes.imshow(np.asarray(image) / 255)
        axes.axis('off')
        if render_factor is not None and display_render_factor:
            plt.text(
                10,
                10,
                'render_factor: ' + str(render_factor),
                color='white',
                backgroundcolor='black',
            )

    def _get_num_rows_columns(self, num_images: int, max_columns: int) -> Tuple[int, int]:
        columns = min(num_images, max_columns)
        rows = num_images // columns
        rows = rows if rows * columns == num_images else rows + 1
        return rows, columns

import os
class VideoColorizer:
    def __init__(self, vis: ModelImageVisualizer,workfolder: Path = None):
        self.vis = vis
        self.workfolder = workfolder
        self.source_folder = os.path.join(self.workfolder, "source")
        self.bwframes_root = os.path.join(self.workfolder, "bwframes")
        self.audio_root = os.path.join(self.workfolder, "audio")
        self.colorframes_root = os.path.join(self.workfolder,  "colorframes")
        self.result_folder = os.path.join(self.workfolder,  "result")

    def _purge_images(self, dir):
        for f in os.listdir(dir):
            if re.search(r'.*?\.jpg', f):
                os.remove(os.path.join(dir, f))

    def _get_ffmpeg_probe(self, path:Path):
        try:
            probe = ffmpeg.probe(str(path))
            return probe
        except ffmpeg.Error as e:
            logging.error("ffmpeg error: {0}".format(e), exc_info=True)
            logging.error('stdout:' + e.stdout.decode('UTF-8'))
            logging.error('stderr:' + e.stderr.decode('UTF-8'))
            raise e
        except Exception as e:
            logging.error('Failed to instantiate ffmpeg.probe.  Details: {0}'.format(e), exc_info=True)   
            raise e

    def _get_fps(self, source_path: Path) -> str:
        probe = self._get_ffmpeg_probe(source_path)
        stream_data = next(
            (stream for stream in probe['streams'] if stream['codec_type'] == 'video'),
            None,
        )
        return stream_data['avg_frame_rate']

    def _download_video_from_url(self, source_url, source_path: Path):
        if source_path.exists():
            source_path.unlink()

        ydl_opts = {
            'format': 'bestvideo[ext=mp4]+bestaudio[ext=m4a]/mp4',
            'outtmpl': str(source_path),
            'retries': 30,
            'fragment_retries': 30
        }
        with youtube_dl.YoutubeDL(ydl_opts) as ydl:
            ydl.download([source_url])

    def _extract_raw_frames(self, source_path: Path):
        bwframes_folder = os.path.join(self.bwframes_root, source_path.stem)
        bwframes_folder = Path(bwframes_folder)
        bwframe_path_template = os.path.join(str(bwframes_folder), '%5d.jpg')
        bwframes_folder.mkdir(parents=True, exist_ok=True)
        self._purge_images(bwframes_folder)

        process = (
            ffmpeg
                .input(str(source_path))
                .output(str(bwframe_path_template), format='image2', vcodec='mjpeg', **{'q:v':'0'})
                .global_args('-hide_banner')
                .global_args('-nostats')
                .global_args('-loglevel', 'error')
        )

        try:
            process.run()
        except ffmpeg.Error as e:
            logging.error("ffmpeg error: {0}".format(e), exc_info=True)
            logging.error('stdout:' + e.stdout.decode('UTF-8'))
            logging.error('stderr:' + e.stderr.decode('UTF-8'))
            raise e
        except Exception as e:
            logging.error('Error while extracting raw frames from source video.  Details: {0}'.format(e), exc_info=True)   
            raise e

    def _colorize_raw_frames(
        self, source_path: Path, render_factor: int = None, post_process: bool = True,g_process_bar: gr.Progress = None
    ):
        colorframes_folder = Path(self.colorframes_root) / source_path.stem
        colorframes_folder.mkdir(parents=True, exist_ok=True)
        self._purge_images(colorframes_folder)
        bwframes_folder = Path(self.bwframes_root) / source_path.stem
        p_status = 0
        image_index = 0
        total_images = len(os.listdir(str(bwframes_folder)))
        for img in progress_bar(os.listdir(str(bwframes_folder))):
            img_path = bwframes_folder / img

            image_index += 1
            if g_process_bar is not None:
                p_status = image_index / total_images
                g_process_bar(p_status,"Colorizing...")

            if os.path.isfile(str(img_path)):
                color_image = self.vis.get_transformed_image(
                    str(img_path), render_factor=render_factor, post_process=post_process
                )
                color_image.save(str(colorframes_folder / img))

    def _build_video(self, source_path: Path) -> Path:
        colorized_path = Path(self.result_folder) / source_path.name
        colorframes_folder = Path(self.colorframes_root) / source_path.stem

        colorframes_path_template = str(colorframes_folder / '%5d.jpg')
        colorized_path.parent.mkdir(parents=True, exist_ok=True)
        if colorized_path.exists():
            colorized_path.unlink()
        fps = self._get_fps(source_path)

        process = (
            ffmpeg 
                .input(str(colorframes_path_template), format='image2', vcodec='mjpeg', framerate=fps) 
                .output(str(colorized_path), crf=17, vcodec='libx264')
                .global_args('-hide_banner')
                .global_args('-nostats')
                .global_args('-loglevel', 'error')
        )

        try:
            process.run()
        except ffmpeg.Error as e:
            logging.error("ffmpeg error: {0}".format(e), exc_info=True)
            logging.error('stdout:' + e.stdout.decode('UTF-8'))
            logging.error('stderr:' + e.stderr.decode('UTF-8'))
            raise e
        except Exception as e:
            logging.error('Error while building output video.  Details: {0}'.format(e), exc_info=True)   
            raise e

        result_path = Path(self.result_folder) / source_path.name
        if result_path.exists():
            result_path.unlink()
        # making copy of non-audio version in case adding back audio doesn't apply or fails.
        shutil.copyfile(str(colorized_path), str(result_path))

        # adding back sound here
        audio_file = Path(str(source_path).replace('.mp4', '.aac'))
        if audio_file.exists():
            audio_file.unlink()

        os.system(
            'ffmpeg -y -i "'
            + str(source_path)
            + '" -vn -acodec copy "'
            + str(audio_file)
            + '"'
            + ' -hide_banner'
            + ' -nostats'
            + ' -loglevel error'
        )

        if audio_file.exists():
            os.system(
                'ffmpeg -y -i "'
                + str(colorized_path)
                + '" -i "'
                + str(audio_file)
                + '" -shortest -c:v copy -c:a aac -b:a 256k "'
                + str(result_path)
                + '"'
                + ' -hide_banner'
                + ' -nostats'
                + ' -loglevel error'
            )
        logging.info('Video created here: ' + str(result_path))
        return result_path

    def colorize_from_url(
        self,
        source_url,
        file_name: str,
        render_factor: int = None,
        post_process: bool = True,
    ) -> Path:
        source_path = Path(self.source_folder) / file_name

        self._download_video_from_url(source_url, source_path)
        return self._colorize_from_path(
            source_path, render_factor=render_factor, post_process=post_process
        )

    def colorize_from_file_name(
        self, file_name: str, render_factor: int = None, post_process: bool = True,g_process_bar: gr.Progress = None
    ) -> Path:
        source_path = Path(self.source_folder) / file_name
        return self._colorize_from_path(
            source_path, render_factor=render_factor,  post_process=post_process,g_process_bar=g_process_bar
        )

    def _colorize_from_path(
        self, source_path: Path, render_factor: int = None, post_process: bool = True,g_process_bar: gr.Progress = None
    ) -> Path:
        if not source_path.exists():
            raise Exception(
                'Video at path specfied, ' + str(source_path) + ' could not be found.'
            )
        if g_process_bar is not None:
            g_process_bar(0,"Extracting frames...")
        self._extract_raw_frames(source_path)
        self._colorize_raw_frames(
            source_path, render_factor=render_factor,post_process=post_process,g_process_bar=g_process_bar
        )
        return self._build_video(source_path)


def get_video_colorizer(render_factor: int = 21,workfolder:str = "./video") -> VideoColorizer:
    return get_stable_video_colorizer(render_factor=render_factor,workfolder=workfolder)


def get_artistic_video_colorizer(
    root_folder: Path = Path('./'),
    weights_name: str = 'ColorizeArtistic_gen',
    results_dir='result_images',
    render_factor: int = 35
) -> VideoColorizer:
    learn = gen_inference_deep(root_folder=root_folder, weights_name=weights_name)
    filtr = MasterFilter([ColorizerFilter(learn=learn)], render_factor=render_factor)
    vis = ModelImageVisualizer(filtr, results_dir=results_dir)
    return VideoColorizer(vis)


def get_stable_video_colorizer(
    root_folder: Path = Path('./'),
    weights_name: str = 'ColorizeVideo_gen',
    results_dir='result_images',
    render_factor: int = 21,
    workfolder:str = "./video"
) -> VideoColorizer:
    learn = gen_inference_wide(root_folder=root_folder, weights_name=weights_name)
    filtr = MasterFilter([ColorizerFilter(learn=learn)], render_factor=render_factor)
    vis = ModelImageVisualizer(filtr, results_dir=results_dir)
    return VideoColorizer(vis,workfolder=workfolder)


def get_image_colorizer(
    root_folder: Path = Path('./'), render_factor: int = 35, artistic: bool = True
) -> ModelImageVisualizer:
    if artistic:
        return get_artistic_image_colorizer(root_folder=root_folder, render_factor=render_factor)
    else:
        return get_stable_image_colorizer(root_folder=root_folder, render_factor=render_factor)


def get_stable_image_colorizer(
    root_folder: Path = Path('./'),
    weights_name: str = 'ColorizeStable_gen',
    results_dir='result_images',
    render_factor: int = 35
) -> ModelImageVisualizer:
    learn = gen_inference_wide(root_folder=root_folder, weights_name=weights_name)
    filtr = MasterFilter([ColorizerFilter(learn=learn)], render_factor=render_factor)
    vis = ModelImageVisualizer(filtr, results_dir=results_dir)
    return vis


def get_artistic_image_colorizer(
    root_folder: Path = Path('./'),
    weights_name: str = 'ColorizeArtistic_gen',
    results_dir='result_images',
    render_factor: int = 35
) -> ModelImageVisualizer:
    learn = gen_inference_deep(root_folder=root_folder, weights_name=weights_name)
    filtr = MasterFilter([ColorizerFilter(learn=learn)], render_factor=render_factor)
    vis = ModelImageVisualizer(filtr, results_dir=results_dir)
    return vis

With these changes in place, the code runs successfully.

3. Running and Post-processing the Video

You can see that DeOldify splits the video into individual JPEG frames and then restores the color frame by frame. However, this code does not seem to reassemble the frames into a video automatically: the run finishes without producing a final video file.

The bad news is that one script alone cannot produce the finished video; the good news is that the processed frames are automatically saved to a folder, "E:\***\***\colorframes\video".


The unprocessed (black-and-white) frames are saved in "E:\***\***\bwframes\video".


We can merge the processed frames back into a video, extract the audio track from the original video, and combine the two into the finished result. The code that performs this merge is as follows:

from moviepy.editor import VideoFileClip, ImageSequenceClip
import os

# Folder of colorized frames and path to the original source video
folder_path = 'E:/yolo/pythonProject/colorframes/video'
video_source_path = 'E:/yolo/pythonProject/source/video.mp4'

# Build a video clip from the image sequence in the folder
image_files = [f for f in os.listdir(folder_path) if f.endswith('.jpg')]
image_files.sort()
clip = ImageSequenceClip([os.path.join(folder_path, img) for img in image_files], fps=30)

# Extract the audio track from the source video
video_clip = VideoFileClip(video_source_path)
audio_clip = video_clip.audio

# Attach the audio to the image-sequence clip
final_clip = clip.set_audio(audio_clip)

# Write the final video
final_clip.write_videofile('output_with_audio.mp4', codec='libx264', audio_codec='aac')
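One caveat: the `fps=30` above is a guess, and if the source is not exactly 30 fps the rebuilt video drifts out of sync with its audio. The `_get_fps` helper in the modified visualize.py returns ffprobe's `avg_frame_rate` as a string such as `'30000/1001'`; a small stdlib helper (a sketch, the function name is my own) converts that into the float that `ImageSequenceClip` expects:

```python
from fractions import Fraction

def parse_frame_rate(rate: str, fallback: float = 30.0) -> float:
    """Convert an ffprobe avg_frame_rate string such as '30000/1001' to a float fps."""
    try:
        fps = float(Fraction(rate))
        return fps if fps > 0 else fallback
    except (ValueError, ZeroDivisionError):
        # '0/0' or malformed strings: fall back to a default rate
        return fallback

print(parse_frame_rate('30000/1001'))  # ~29.97 (NTSC)
print(parse_frame_rate('25/1'))        # 25.0
```

The result can then be passed directly as `fps=` when building the `ImageSequenceClip`.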

Summary:

Comparing the restored video with the original, it is clear that the DeOldify model still has strong restoration ability. However, yellowish blotches from the original footage remain in some places, mainly on human faces and in visually complex parts of the scene. If more images of this kind were added during training, the model might still have room to improve.

Comparison of results: (before/after screenshots not reproduced here)
