0 Preface
Over the past two years, the requirements and difficulty of graduation projects and thesis defenses have kept rising. Traditional project topics lack innovation and highlights and often fall short of defense requirements, and junior students keep telling me that the systems they built did not meet their advisors' expectations.
To help everyone pass the graduation project smoothly and with the least effort, I am sharing high-quality graduation project ideas. Today's pick is:
A big-data-based recruitment and rental housing analysis and visualization system
Here is my overall rating for this topic (each item out of 5):
- Difficulty: 3
- Workload: 4
- Innovation: 5
1 Project Introduction
This project uses a Python web crawler to scrape job postings from popular recruitment sites and listings from rental sites, cleans and structures the data, stores it in a database, and builds a web system that statistically analyzes and visualizes salary and benefits in the recruitment data and the factors influencing rental prices, such as district and orientation.
2 Related Technologies
2.1 Web crawler
A web crawler is a program or script that automatically fetches information from the web according to a set of rules. The crawler visits a site; if the site is reachable, it downloads the page content, the parsing module extracts the links it finds, and those links become subsequent crawl targets, with the whole process running automatically without user intervention. If a site is unreachable, the crawler moves on to the next URL according to its preset policy. Throughout, the crawler issues data requests asynchronously and returns the fetched page data. Before a crawl starts, the user can add proxies and disguise the request headers to fetch page data more reliably. The crawler flow boils down to: fetch a page, parse out its links, enqueue them, repeat.
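To make that loop concrete, here is a minimal sketch in Python; the seed URL, the link filter, and the page cap are illustrative placeholders, not the project's actual crawl targets:

import requests
from bs4 import BeautifulSoup
from collections import deque
from urllib.parse import urljoin

def crawl(seed, max_pages=10):
    """Breadth-first crawl: fetch a page, harvest its links, queue them next."""
    queue, seen, pages = deque([seed]), {seed}, []
    headers = {'User-Agent': 'Mozilla/5.0'}  # disguise the request header
    while queue and len(pages) < max_pages:
        url = queue.popleft()
        try:
            res = requests.get(url, headers=headers, timeout=3)
        except requests.RequestException:
            continue  # unreachable: move on to the next URL per the crawl policy
        pages.append((url, res.text))
        for a in BeautifulSoup(res.text, 'html.parser').find_all('a', href=True):
            link = urljoin(url, a['href'])
            if link not in seen:
                seen.add(link)
                queue.append(link)  # harvested links become later crawl targets
    return pages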
2.2 Ajax
Ajax is a browser-side technology that is independent of the web server software.
Ajax's core object is XMLHttpRequest, which lets JavaScript send a request to the server and handle the response without blocking the user. Through this object, JavaScript can exchange data with the web server without reloading the page, so a partial refresh is achieved without a full page refresh.
The front end serializes the required parameters into a JSON string and sends it to the server with a GET/POST request; the back end receives the request, uses the data as query conditions, and returns the query result set to the front end as a JSON string; the front end then inspects the returned data and renders the page accordingly.
$.ajax({
    url: 'http://127.0.0.1:5000/updatePass',
    type: "POST",
    data: JSON.stringify(data.field),  // serialize the form fields to JSON
    contentType: "application/json; charset=utf-8",
    dataType: "json",
    success: function (res) {
        // the back end replies with {code, msg}; 200 means success
        if (res.code == 200) {
            layer.msg(res.msg, {icon: 1});
        } else {
            layer.msg(res.msg, {icon: 2});
        }
    }
})
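On the server side, a Flask view can consume this JSON body and reply in the {code, msg} shape the success callback checks. This is only a sketch: the route matches the request above, but the field names (oldPassword/newPassword) and the password check are assumptions, not code from the project:

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/updatePass', methods=['POST'])
def update_pass():
    # parse the JSON body sent by $.ajax (contentType: application/json)
    payload = request.get_json(silent=True) or {}
    old_pw, new_pw = payload.get('oldPassword'), payload.get('newPassword')
    if not old_pw or not new_pw:
        return jsonify(code=400, msg='Missing parameters')
    # ... verify old_pw against the user table and persist new_pw here ...
    return jsonify(code=200, msg='Password updated')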
3 ECharts
ECharts (Enterprise Charts) is a data visualization library open-sourced by Baidu (now an Apache project), built on the lightweight canvas library ZRender. Its compatibility with almost every commonly used browser makes it usable on both desktop and mobile. ECharts helps developers integrate their data and build personalized visualizations; it supports line (area) charts, bar (column) charts, scatter (bubble) charts, candlestick charts, pie (donut) charts, and more, and runs in a web project simply by importing its JS library.
4 Data Acquisition
We used Python's requests plus BeautifulSoup (and PyQuery for the rental pages) to scrape recruitment and rental data for nine cities from platforms such as Lagou and Lianjia.
4.1 Overall workflow
4.2 Fetching recruitment data
Lagou has a fairly strong anti-crawling mechanism, so we wrap the headers with a User-Agent and cookies to disguise the crawler as a browser visiting the page. We send the URL request with the requests package's post method; a successful request returns a JSON string, which we read directly with dict access to get the position data we want. From the total position count and the number of positions shown per page we can compute the total number of pages, loop over them page by page, aggregate the position data, and finally write it to a CSV file and to the local MySQL database.
import requests
import math
import time
import pandas as pd
import pymysql
from sqlalchemy import create_engine


def get_json(url, num):
    """
    Request the given url via requests, carrying the request headers and body,
    and return the parsed JSON from the response.
    :return:
    """
    url1 = 'https://www.lagou.com/jobs/list_python/p-city_0?&cl=false&fromSearch=true&labelWords=&suginput='
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.139 Safari/537.36',
        'Host': 'www.lagou.com',
        'Referer': 'https://www.lagou.com/jobs/list_%E6%95%B0%E6%8D%AE%E5%88%86%E6%9E%90?labelWords=&fromSearch=true&suginput=',
        'X-Anit-Forge-Code': '0',
        'X-Anit-Forge-Token': 'None',
        'X-Requested-With': 'XMLHttpRequest',
        # session-specific cookie string; replace it with your own when reproducing
        'Cookie':'user_trace_token=20210218203227-35e936a1-f40f-410d-8400-b87f9fb4be0f; _ga=GA1.2.331665492.1613651550; LGUID=20210218203230-39948353-de3f-4545-aa01-43d147708c69; LG_HAS_LOGIN=1; hasDeliver=0; privacyPolicyPopup=false; showExpriedIndex=1; showExpriedCompanyHome=1; showExpriedMyPublish=1; RECOMMEND_TIP=true; index_location_city=%E5%85%A8%E5%9B%BD; Hm_lvt_4233e74dff0ae5bd0a3d81c6ccf756e6=1613651550,1613652253,1613806244,1614497914; _putrc=52ABCFBE36E5D0BD123F89F2B170EADC; gate_login_token=ea312e017beac7fe72547a32956420b07d6d5b1816bc766035dd0f325ba92b91; JSESSIONID=ABAAAECAAEBABII8D8278DB16CB050FD656DD1816247B43; login=true; unick=%E7%94%A8%E6%88%B72933; WEBTJ-ID=20210228%E4%B8%8B%E5%8D%883:38:37153837-177e7932b7f618-05a12d1b3d5e8c-53e356a-1296000-177e7932b8071; sensorsdata2015session=%7B%7D; _gid=GA1.2.1359196614.1614497918; __lg_stoken__=bb184dd5d959320e9e61d943e802ac98a8538d44699751621e807e93fe0ffea4c1a57e923c71c93a13c90e5abda7a51873c2e488a4b9d76e67e0533fe9e14020734016c0dcf2; X_MIDDLE_TOKEN=90b85c3630b92280c3ad7a96c881482e; LGSID=20210228161834-659d6267-94a3-4a5c-9857-aaea0d5ae2ed; TG-TRACK-CODE=index_navigation; SEARCH_ID=092c1fd19be24d7cafb501684c482047; X_HTTP_TOKEN=fdb10b04b25b767756070541617f658231fd72d78b; sensorsdata2015jssdkcross=%7B%22distinct_id%22%3A%2220600756%22%2C%22first_id%22%3A%22177b521c02a552-08c4a0f886d188-73e356b-1296000-177b521c02b467%22%2C%22props%22%3A%7B%22%24latest_traffic_source_type%22%3A%22%E7%9B%B4%E6%8E%A5%E6%B5%81%E9%87%8F%22%2C%22%24latest_search_keyword%22%3A%22%E6%9C%AA%E5%8F%96%E5%88%B0%E5%80%BC_%E7%9B%B4%E6%8E%A5%E6%89%93%E5%BC%80%22%2C%22%24latest_referrer%22%3A%22%22%2C%22%24os%22%3A%22Linux%22%2C%22%24browser%22%3A%22Chrome%22%2C%22%24browser_version%22%3A%2288.0.4324.190%22%2C%22lagou_company_id%22%3A%22%22%7D%2C%22%24device_id%22%3A%22177b521c02a552-08c4a0f886d188-73e356b-1296000-177b521c02b467%22%7D; _gat=1; Hm_lpvt_4233e74dff0ae5bd0a3d81c6ccf756e6=1614507066; LGRID=20210228181106-f2d71d85-74fe-4b43-b87e-d78a33c872ad'
    }
    data = {
        'first': 'true',
        'pn': num,
        'kd': 'BI工程師'}  # search keyword ("BI Engineer")
    # obtain fresh cookies by visiting the listing page first
    s = requests.Session()
    print('Session created:', s, '\n\n')
    s.get(url=url1, headers=headers, timeout=3)
    cookie = s.cookies
    print('Cookies obtained:', cookie, '\n\n')
    # send the POST request with the parameters, headers and cookies
    res = requests.post(url, headers=headers, data=data, cookies=cookie, timeout=3)
    res.raise_for_status()
    res.encoding = 'utf-8'
    page_data = res.json()
    print('Response:', page_data, '\n\n')
    return page_data


def get_page_num(count):
    """
    Compute the number of pages to crawl. Searching on Lagou shows that at most
    30 pages are displayed, with at most 15 positions per page.
    :return:
    """
    page_num = math.ceil(count / 15)
    if page_num > 29:
        return 29
    else:
        return page_num


def get_page_info(jobs_list):
    """
    Extract the fields of interest from every position on a page.
    :param jobs_list:
    :return:
    """
    page_info_list = []
    for i in jobs_list:  # loop over all positions on this page
        job_info = []
        job_info.append(i['companyFullName'])
        job_info.append(i['companyShortName'])
        job_info.append(i['companySize'])
        job_info.append(i['financeStage'])
        job_info.append(i['district'])
        job_info.append(i['positionName'])
        job_info.append(i['workYear'])
        job_info.append(i['education'])
        job_info.append(i['salary'])
        job_info.append(i['positionAdvantage'])
        job_info.append(i['industryField'])
        job_info.append(i['firstType'])
        job_info.append(",".join(i['companyLabelList']))
        job_info.append(i['secondType'])
        job_info.append(i['city'])
        page_info_list.append(job_info)
    return page_info_list


def unique(old_list):
    """Drop duplicate rows while preserving their order."""
    newList = []
    for x in old_list:
        if x not in newList:
            newList.append(x)
    return newList


def main():
    connect_info = 'mysql+pymysql://{}:{}@{}:{}/{}?charset=utf8'.format(
        "root", "123456", "localhost", "3306", "20_lagou")
    engine = create_engine(connect_info)
    url = 'https://www.lagou.com/jobs/positionAjax.json?needAddtionalResult=false'
    first_page = get_json(url, 1)
    total_page_count = first_page['content']['positionResult']['totalCount']
    num = get_page_num(total_page_count)
    total_info = []
    time.sleep(10)
    for num in range(1, num + 1):
        # fetch the position data for each page
        page_data = get_json(url, num)  # JSON response for this page
        jobs_list = page_data['content']['positionResult']['result']  # all positions on this page
        page_info = get_page_info(jobs_list)
        total_info += page_info
        print('Crawled page {}, {} positions collected so far'.format(num, len(total_info)))
        time.sleep(20)
    # deduplicate, convert to a DataFrame, then write to a CSV file and to the local database
    df = pd.DataFrame(data=unique(total_info),
                      columns=['companyFullName', 'companyShortName', 'companySize', 'financeStage',
                               'district', 'positionName', 'workYear', 'education',
                               'salary', 'positionAdvantage', 'industryField',
                               'firstType', 'companyLabelList', 'secondType', 'city'])
    df.to_csv('bi.csv', index=True)
    print('Positions saved to the local CSV file')
    df.to_sql(name='demo', con=engine, if_exists='append', index=False)
    print('Positions saved to the database')


if __name__ == '__main__':
    main()
4.3 Fetching rental listings
import requests
from pyquery import PyQuery as pq
from fake_useragent import UserAgent
import time
import pandas as pd
import random
import pymysql
from sqlalchemy import create_engine

UA = UserAgent()
headers = {
    'Accept-Language': 'zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6',
    # session-specific cookie string; replace it with your own when reproducing
    'Cookie': 'lianjia_uuid=6383a9ce-19b9-47af-82fb-e8ec386eb872; UM_distinctid=1777521dc541e1-09601796872657-53e3566-13c680-1777521dc5547a; _smt_uid=601dfc61.4fcfbc4b; _ga=GA1.2.894053512.1612577894; _jzqc=1; _jzqckmp=1; _gid=GA1.2.1480435812.1614959594; Hm_lvt_9152f8221cb6243a53c83b956842be8a=1614049202,1614959743; csrfSecret=lqKM3_19PiKkYOfJSv6ldr_c; activity_ke_com=undefined; ljisid=6383a9ce-19b9-47af-82fb-e8ec386eb872; select_nation=1; crosSdkDT2019DeviceId=-kkiavn-2dq4ie-j9ekagryvmo7rd3-qjvjm0hxo; Hm_lpvt_9152f8221cb6243a53c83b956842be8a=1615004691; sensorsdata2015jssdkcross=%7B%22distinct_id%22%3A%221777521e37421a-0e1d8d530671de-53e3566-1296000-1777521e375321%22%2C%22%24device_id%22%3A%221777521e37421a-0e1d8d530671de-53e3566-1296000-1777521e375321%22%2C%22props%22%3A%7B%22%24latest_traffic_source_type%22%3A%22%E8%87%AA%E7%84%B6%E6%90%9C%E7%B4%A2%E6%B5%81%E9%87%8F%22%2C%22%24latest_referrer%22%3A%22https%3A%2F%2Fwww.baidu.com%2Flink%22%2C%22%24latest_referrer_host%22%3A%22www.baidu.com%22%2C%22%24latest_search_keyword%22%3A%22%E6%9C%AA%E5%8F%96%E5%88%B0%E5%80%BC%22%2C%22%24latest_utm_source%22%3A%22guanwang%22%2C%22%24latest_utm_medium%22%3A%22pinzhuan%22%2C%22%24latest_utm_campaign%22%3A%22wybeijing%22%2C%22%24latest_utm_content%22%3A%22biaotimiaoshu%22%2C%22%24latest_utm_term%22%3A%22biaoti%22%7D%7D; lianjia_ssid=7a179929-0f9a-40a4-9537-d1ddc5164864; _jzqa=1.3310829580005876700.1612577889.1615003848.1615013370.6; _jzqy=1.1612577889.1615013370.2.jzqsr=baidu|jzqct=%E9%93%BE%E5%AE%B6.jzqsr=baidu; select_city=440300; srcid=eyJ0Ijoie1wiZGF0YVwiOlwiZjdiNTI1Yjk4YjI3MGNhNjRjMGMzOWZkNDc4NjE4MWJkZjVjNTZiMWYxYTM4ZTJkNzMxN2I0Njc1MDEyY2FiOWMzNTIzZTE1ZjEyZTE3NjlkNTRkMTA2MWExZmIzMWM5YzQ3ZmQxM2M3NTM5YTQ1YzM5OWU0N2IyMmFjM2ZhZmExOGU3ZTc1YWU0NDQ4NTdjY2RiMjEwNTQyMDQzM2JiM2UxZDQwZWQwNzZjMWQ4OTRlMGRkNzdmYjExZDQwZTExNTg5NTFkODIxNWQzMzdmZTA4YmYyOTFhNWQ2OWQ1OWM4ZmFlNjc0OTQzYjA3NDBjNjNlNDYyNTZiOWNhZmM4ZDZlMDdhNzdlMTY1NmM0ZmM4ZGI4ZGNlZjg2OTE2MmU4M2MwYThhNTljMGNkODYxYjliNGYwNGM0NzJhNGM3MmVmZDUwMTJmNmEwZWMwZjBhMzBjNWE2OWFjNzEzMzM4M1wiLFwia2V5X2lkXCI6XCIxXCIsXCJzaWduXCI6XCJhYWEyMjhiNVwifSIsInIiOiJodHRwczovL20ubGlhbmppYS5jb20vY2h1enUvc3ovenVmYW5nL3BnJTdCJTdELyIsIm9zIjoid2ViIiwidiI6IjAuMSJ9',
    'Host': 'sz.lianjia.com',
    'Referer': 'https://sz.lianjia.com/zufang/',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.182 Safari/537.36',
}
num_page = 2


class Lianjia_Crawer:
    def __init__(self, txt_path):
        super(Lianjia_Crawer, self).__init__()
        self.file = str(txt_path)
        self.df = pd.DataFrame(
            columns=['title', 'district', 'area', 'orient', 'floor', 'price', 'city'])

    def run(self):
        '''Run the crawler, then persist the results to CSV and MySQL.'''
        connect_info = 'mysql+pymysql://{}:{}@{}:{}/{}?charset=utf8'.format(
            "root", "123456", "localhost", "3366", "lagou")
        engine = create_engine(connect_info)
        for i in range(100):
            url = "https://sz.lianjia.com/zufang/pg{}/".format(str(i))
            self.parse_url(url)
            time.sleep(random.randint(2, 5))
            print('Crawling url {}'.format(url))
        print('Crawling finished!')
        self.df.to_csv(self.file, encoding='utf-8')
        print('Rental listings saved to the local CSV file')
        self.df.to_sql(name='house', con=engine,
                       if_exists='append', index=False)
        print('Rental listings saved to the database')

    def parse_url(self, url):
        headers['User-Agent'] = UA.chrome  # rotate the User-Agent on every request
        res = requests.get(url, headers=headers)
        # build a PyQuery object over the page
        doc = pq(res.text)
        for i in doc('.content__list--item .content__list--item--main'):
            try:
                pq_i = pq(i)
                # listing title
                title = pq_i('.content__list--item--title a').text()
                # description line: district-address / area / orientation / ... / floor
                houseinfo = pq_i('.content__list--item--des').text()
                # administrative district
                address = str(houseinfo).split('/')[0]
                district = str(address).split('-')[0]
                # floor area (strip the trailing unit character)
                full_area = str(houseinfo).split('/')[1]
                area = str(full_area)[:-1]
                # orientation
                orient = str(houseinfo).split('/')[2]
                # floor
                floor = str(houseinfo).split('/')[-1]
                # price
                price = pq_i('.content__list--item-price').text()
                # city (Shenzhen)
                city = '深圳'
                data_dict = {'title': title, 'district': district, 'area': area,
                             'orient': orient, 'floor': floor, 'price': price, 'city': city}
                # note: DataFrame.append was removed in pandas 2.0; use pd.concat there
                self.df = self.df.append(data_dict, ignore_index=True)
                print([title, district, area, orient, floor, price, city])
            except Exception as e:
                print(e)
                print("Field extraction failed, please retry!")


if __name__ == "__main__":
    # txt_path = "zufang_shenzhen.csv"
    txt_path = "test.csv"
    Crawer = Lianjia_Crawer(txt_path)
    Crawer.run()  # start the crawler
Crawling process
The scraped data is written both to CSV files and to the local MySQL database.
The database uses three tables to store the user, rental, and recruitment data respectively.
Connecting to the database:
connect_info = 'mysql+pymysql://{}:{}@{}:{}/{}?charset=utf8'.format("root", "123456", "localhost", "3306","my_db")
engine = create_engine(connect_info)
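Note that df.to_sql(..., if_exists='append') creates the demo (jobs) and house (rentals) tables automatically from the DataFrame columns on the first write, so only the user table needs explicit DDL. A minimal sketch, with assumed column names (the source does not show the actual schema):

from sqlalchemy import create_engine, text

engine = create_engine('mysql+pymysql://root:123456@localhost:3306/my_db?charset=utf8')

# demo and house are created by df.to_sql on first append;
# the user table backs the login / personal-center features and is created by hand.
with engine.begin() as conn:
    conn.execute(text("""
        CREATE TABLE IF NOT EXISTS user (
            id INT PRIMARY KEY AUTO_INCREMENT,
            username VARCHAR(64) UNIQUE NOT NULL,
            password VARCHAR(128) NOT NULL
        )
    """))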
5 Data Visualization
The results are visualized with ECharts.
Create the project directory structure as follows: the js directory holds the ECharts js files (you can download the version you need from the ECharts website), and index.html is the entry page. Part of index.html is shown below:
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8"/>
    <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
    <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1">
    <link href="./assets/images/logo.png" rel="icon">
    <title>Graduate Recruitment + Rental Data Visualization System</title>
    <link rel="stylesheet" href="./assets/libs/layui/css/layui.css"/>
    <link rel="stylesheet" href="./assets/module/admin.css?v=315"/>
    <!--[if lt IE 9]>
    <script src="https://oss.maxcdn.com/html5shiv/3.7.3/html5shiv.min.js"></script>
    <script src="https://oss.maxcdn.com/respond/1.4.2/respond.min.js"></script>
    <![endif]-->
</head>
<body class="layui-layout-body">
<div class="layui-layout layui-layout-admin">
    <!-- Header -->
    <div class="layui-header">
        <div class="layui-logo">
            <img src="./assets/images/logo.png"/>
            <cite> Graduate Data Visualization</cite>
        </div>
        <ul class="layui-nav layui-layout-left">
            <li class="layui-nav-item" lay-unselect>
                <a ew-event="flexible" title="Toggle sidebar"><i class="layui-icon layui-icon-shrink-right"></i></a>
            </li>
            <li class="layui-nav-item" lay-unselect>
                <a ew-event="refresh" title="Refresh"><i class="layui-icon layui-icon-refresh-3"></i></a>
            </li>
        </ul>
        <ul class="layui-nav layui-layout-right">
            <li class="layui-nav-item" lay-unselect>
                <a ew-event="message" title="Messages">
                    <i class="layui-icon layui-icon-notice"></i>
                    <span class="layui-badge-dot"></span>
                </a>
            </li>
            <li class="layui-nav-item" lay-unselect>
                <a ew-event="note" title="Notes"><i class="layui-icon layui-icon-note"></i></a>
            </li>
            <li class="layui-nav-item layui-hide-xs" lay-unselect>
                <a ew-event="fullScreen" title="Full screen"><i class="layui-icon layui-icon-screen-full"></i></a>
            </li>
            <li class="layui-nav-item layui-hide-xs" lay-unselect>
                <a ew-event="lockScreen" title="Lock screen"><i class="layui-icon layui-icon-password"></i></a>
            </li>
            <li class="layui-nav-item" lay-unselect>
                <a>
                    <img src="assets/images/head.png" class="layui-nav-img">
                    <cite>zz</cite>
                </a>
                <dl class="layui-nav-child">
                    <dd lay-unselect>
                        <a ew-href="page/template/user-info.html">Personal center</a>
                    </dd>
                    <dd lay-unselect>
                        <a ew-event="psw">Change password</a>
                    </dd>
                    <hr>
                    <dd lay-unselect>
                        <a ew-event="logout" data-url="page/template/login.html">Log out</a>
                    </dd>
                </dl>
            </li>
            <li class="layui-nav-item" lay-unselect>
                <a ew-event="theme" title="Theme"><i class="layui-icon layui-icon-more-vertical"></i></a>
            </li>
        </ul>
    </div>
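The charts on the page are fed by back-end routes that aggregate the MySQL data and return JSON in the parallel-array shape ECharts expects. A hedged sketch (the route name and SQL are assumptions consistent with the demo table created earlier):

from flask import Flask, jsonify
from sqlalchemy import create_engine, text

app = Flask(__name__)
engine = create_engine('mysql+pymysql://root:123456@localhost:3306/my_db?charset=utf8')

@app.route('/api/jobCountByCity')
def job_count_by_city():
    # aggregate job postings per city for a bar chart
    with engine.connect() as conn:
        rows = conn.execute(text(
            "SELECT city, COUNT(*) AS n FROM demo GROUP BY city ORDER BY n DESC"
        )).fetchall()
    # ECharts bar charts want parallel category/value arrays
    return jsonify(code=200,
                   cities=[r[0] for r in rows],
                   counts=[r[1] for r in rows])

The front end would request such a route with $.ajax, as in section 2.2, and pass cities and counts to chart.setOption as the xAxis categories and the series data.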
6 Results
6.1 Overview of the recruitment and rental data
Records can be filtered and queried by education and position, as sketched below.
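A sketch of how such a filter could be served from the demo table (the function, its parameters, and the column choices are assumptions for illustration):

import pandas as pd
import pymysql

def query_jobs(education=None, position=None):
    """Filter job records by education requirement and/or position keyword."""
    conn = pymysql.connect(host='localhost', port=3306, user='root',
                           password='123456', database='my_db', charset='utf8')
    sql, params = "SELECT * FROM demo WHERE 1=1", []
    if education:
        sql += " AND education = %s"
        params.append(education)
    if position:
        sql += " AND positionName LIKE %s"
        params.append('%{}%'.format(position))
    try:
        return pd.read_sql(sql, conn, params=params)
    finally:
        conn.close()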
6.2 Personal center
Users can edit their basic information, change their password, and so on.
6.3 Recruitment data visualization
6.4 City-to-city comparison of recruitment data
6.5 Rental data visualization
6.6 Salary prediction
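The original article only shows a screenshot here. One plausible approach, sketched under assumptions (Lagou salary strings such as "10k-20k" reduced to their midpoints, a linear model over one-hot city/education/experience features), is:

import re
import pandas as pd
from sklearn.linear_model import LinearRegression

def salary_midpoint(s):
    """Convert a salary string such as '10k-20k' to its midpoint in k CNY."""
    nums = [float(x) for x in re.findall(r'(\d+(?:\.\d+)?)k', str(s).lower())]
    return sum(nums) / len(nums) if nums else None

df = pd.read_csv('bi.csv')  # the CSV written by the Lagou scraper above
df['salary_k'] = df['salary'].apply(salary_midpoint)
df = df.dropna(subset=['salary_k'])

# one-hot encode a few categorical features that plausibly drive salary
X = pd.get_dummies(df[['city', 'education', 'workYear']])
model = LinearRegression().fit(X, df['salary_k'])
print('R^2 on training data:', model.score(X, df['salary_k']))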
7 Conclusion
That concludes this walkthrough of the big-data recruitment and rental data analysis and visualization system in Python. I hope it helps you shape your own graduation project.