Table of Contents
- 📚 Input and Output
- 📚 Code Implementation
📚 Input and Output
- Input: read an `input.txt` that contains words together with their corresponding TED check-in numbers (a hypothetical fragment is shown after this list).
- Output:
  - `output.txt`: every word and its count, sorted by frequency in descending order (used directly to generate the word cloud later).
  - `output_word.json`: a JSON file with each word, its count, and the list of TED check-in numbers associated with it (sorted alphabetically; used later to import the data into the web page).
  - `output2.txt`: all words in alphabetical order, i.e. an exported word list (it can be imported into 不背單詞 to build a custom vocabulary list).
  - `word_count.txt`: the cumulative number of distinct words as of each TED check-in number (used later to draw the line chart).
- Word cloud: after processing the data, the script reads `output.txt`, generates a word cloud from the word frequencies, and saves it to the specified directory.
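For concreteness, here is a hypothetical fragment of `input.txt` (one word or phrase per line, followed by the check-in number of the talk it was logged from; the real data will of course differ):

```
resilient 1
come up with 1
inertia 2
resilient 2
```

With this input the script would put `resilient 2` at the top of `output.txt` (it appears in two talks), followed by `come up with 1` and `inertia 1`, and the `resilient` entry in `output_word.json` would carry `"numbers": [1, 2]`.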
📚 Code Implementation
- Logic overview
  - The function uses two defaultdicts: one counts how often each word occurs, the other records the set of check-in numbers associated with each word (see the minimal sketch after this list).
  - Open the input file and read the words and their check-in numbers line by line. For each word, update its frequency and add the check-in number to its set. At the same time, watch the check-in numbers: whenever a new talk starts, record the cumulative number of distinct words seen up to the talk that just ended, writing it to `output_word_count_txt`, which corresponds to word_count.txt.
  - Once all words are counted, sort them by frequency and write the sorted result to `output_txt_file`, which corresponds to output.txt.
  - Store the words, their frequencies, and the corresponding (sorted) check-in number lists as a JSON file, corresponding to `output_word.json`.
  - Write all words in alphabetical order to `output_txt_file_sorted`, corresponding to `output2.txt`.
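Before the full listing, here is a minimal, self-contained sketch of that two-defaultdict pattern on made-up sample pairs (the words and numbers are invented for illustration); it shows why a word logged twice within the same talk is counted only once:

```python
from collections import defaultdict

word_count = defaultdict(int)    # word -> number of distinct talks it appeared in
word_numbers = defaultdict(set)  # word -> set of check-in numbers (a set deduplicates)

# Hypothetical (word, check-in number) pairs
for word, number in [("resilient", 1), ("resilient", 1), ("resilient", 2)]:
    if number not in word_numbers[word]:  # count each word at most once per talk
        word_count[word] += 1
    word_numbers[word].add(number)

print(word_count["resilient"])            # 2, not 3
print(sorted(word_numbers["resilient"]))  # [1, 2]
```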
- For the details, see the inline comments ↓
```python
import json
import re
from collections import defaultdict

import matplotlib.pyplot as plt
from wordcloud import WordCloud


def count_word_frequency(input_file, output_txt_file, output_word_json_file,
                         output_txt_file_sorted, output_word_count_txt):
    # Two defaultdicts: word frequencies, and the set of check-in numbers per word
    word_count = defaultdict(int)
    # Using a set automatically deduplicates the check-in numbers for each word
    word_numbers = defaultdict(set)
    current_number = 0  # current check-in number, initialized to 0

    # Create an empty word-count analysis text file
    open(output_word_count_txt, 'w').close()

    # Open the input file and read each word with its check-in number line by line
    with open(input_file, 'r') as file:
        for line in file:
            line_parts = line.strip().split()
            word = " ".join(line_parts[:-1])  # extract the word
            number = int(line_parts[-1])      # extract the check-in number

            # If the check-in number changed (the next talk has started), record the
            # cumulative word total for the talk just finished in output_word_count_txt
            # (this assumes check-in numbers are consecutive)
            if number != current_number:
                current_number = number
                # sum() over the non-empty sets in word_numbers gives the number of
                # distinct words seen so far
                current_unique_count = sum(1 for word_set in word_numbers.values() if len(word_set) > 0)
                with open(output_word_count_txt, 'a') as count_file:
                    count_file.write(f"{current_number-1} {current_unique_count}\n")

            # Count the word's frequency and its check-in numbers (a word recorded
            # several times within the same talk is counted only once)
            if number not in word_numbers[word]:
                word_count[word] += 1
            word_numbers[word].add(number)

    # Sort each word's check-in numbers so the final lists appear in order
    for word in word_numbers:
        word_numbers[word] = sorted(word_numbers[word])

    # Also record the cumulative word total as of the last check-in number
    current_unique_count = sum(1 for word_set in word_numbers.values() if len(word_set) > 0)
    with open(output_word_count_txt, 'a') as count_file:
        count_file.write(f"{current_number} {current_unique_count}\n")

    # Sort the words by frequency and write the sorted result to the output text file
    sorted_words = sorted(word_count.items(), key=lambda x: (-x[1], x[0]))
    with open(output_txt_file, 'w') as file_txt:
        for word, count in sorted_words:
            file_txt.write(word + " " + str(count) + "\n")

    # Store the words, frequencies, and corresponding check-in lists as a JSON file
    word_data = []
    for word, count in word_count.items():
        word_entry = {
            "word": word,
            "count": count,
            "numbers": list(word_numbers[word])
        }
        word_data.append(word_entry)
    word_data_sorted = sorted(word_data, key=lambda x: x["word"])
    with open(output_word_json_file, 'w') as file_word_json:
        json.dump(word_data_sorted, file_word_json, indent=4)

    # Write all words to the sorted output text file in alphabetical order
    all_words = list(word_count.keys())
    all_words.sort()
    with open(output_txt_file_sorted, 'w') as file_txt_sorted:
        file_txt_sorted.write('\n'.join(all_words) + '\n')


# Define the input and output file names
input_file = "input.txt"
output_txt_file = "output.txt"
output_word_json_file = "output_word.json"
output_txt_file_sorted = "output2.txt"
output_word_count_txt = "word_count.txt"

# Count the word frequencies and generate the related outputs
count_word_frequency(input_file, output_txt_file, output_word_json_file,
                     output_txt_file_sorted, output_word_count_txt)

# Read the word frequency data back from the output text file
words = []
with open('output.txt', 'r', encoding='utf-8') as file:
    for line in file:
        # Match the word and its frequency on each line with a regular expression
        match = re.match(r'(.+?)\s+(\d+)', line)
        if match:
            word = match.group(1)       # the word part
            freq = int(match.group(2))  # the numeric part, used as the frequency
            words.append((word, freq))

# Generate the word cloud image and save it to a file
# (the ./images directory is assumed to exist)
wordcloud = WordCloud(width=800, height=400, background_color='white').generate_from_frequencies(dict(words))
plt.figure(figsize=(10, 6))
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis('off')
wordcloud.to_file('./images/wordcloud.png')
plt.show()
```
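The line chart mentioned under `word_count.txt` is not drawn by this script. Here is a minimal sketch of how it could be produced with matplotlib, assuming the two-column `check-in-number word-total` format written above (the output path and axis labels are my own choices, not from the original):

```python
import matplotlib.pyplot as plt

# Read the "check-in-number cumulative-word-total" pairs written by count_word_frequency
numbers, totals = [], []
with open('word_count.txt', 'r', encoding='utf-8') as f:
    for line in f:
        parts = line.split()
        if len(parts) == 2:
            numbers.append(int(parts[0]))
            totals.append(int(parts[1]))

# Plot the cumulative vocabulary size against the TED check-in number
plt.figure(figsize=(10, 6))
plt.plot(numbers, totals, marker='o')
plt.xlabel('TED check-in number')
plt.ylabel('Cumulative distinct words')
plt.savefig('./images/word_count_line.png')  # hypothetical output path
plt.show()
```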