FrontPage

平井's Research Group 

本論_ポスター 

Files 

平井's Seminar 


B4 Second-Semester Class Schedule (Mon-Fri) 

Periods 1-2, 3-4, 5-6, 9-10, 11-12: no scheduled classes
Periods 7-8: 卒業研究1 (Graduation Research 1)

Research Diary 

[平井_backup]

Specialized Seminar (平井) 

Handover: Flask (清水) 

Chapter Outline 


Memo 


コピー用_3d 


コピー用 


import csv
import requests
from bs4 import BeautifulSoup

# NOTE: the top of this script was cut off in the wiki dump. The imports above
# and the loop header below (which must define `p` and `p_url`) are
# reconstructed assumptions; `pages` and the construction of `p_url` were in
# the lost lines and are not filled in here.
p_review = []
for p in range(1, pages + 1):  # assumption: `pages` defined in the lost lines
    # headers = {
    #     'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36'
    # }
    p_res = requests.get(p_url)
    p_soup = BeautifulSoup(p_res.text, "html.parser")
    # print(p_soup)
    p_i = p_soup.select('#review-pre-area > div > div > div > p')
    # print(p_i)
    # The first page holds 12 reviews, later pages 9.
    if p == 1:
        n_page = 12
    else:
        n_page = 9
    for i in range(n_page):
        p_review.append(p_i[i].text)
        # p_p = p_i[i].find('span')

# Write the collected reviews to CSV, stripping em-dashes (U+2014).
# (The empty strings for `newline=` and the second `replace` argument were
# dropped in the dump; restored here.)
with open('review.csv', 'w', newline='', errors='ignore') as f:
    writer = csv.writer(f)
    for row in p_review:
        writer.writerow([row.replace('\u2014', '')])

# print(p_i[0])
# Earlier version, left commented out in the original:
# for p in range(1, pages+1):
#     url = f'https://eiga.com//user//{user}//review//update//{p}'
#     res = requests.get(url)
#     soup = BeautifulSoup(res.text, "html.parser")
#     list = soup.select('.review-title > a')
#     l = 0
#     for p in list:
#         i_url = list[l].get('href')
#         item_url = i_url.replace('/', '//')
#         next_url = f'https://eiga.com{item_url}'
#         n_res = requests.get(next_url)
#         n_soup = BeautifulSoup(n_res.text, "html.parser")
#         n_block = n_soup.find('div', attrs={'class': 'txt-block'})
#         n_review = n_block.find('p')
#         # print(n_review.text)
#         review_con.append(n_review.text)
#         l += 1
#     # print(n_review.text)
#     # print(review_con[1])
#     with open('review.csv', 'w', newline='') as f:
#         writer = csv.writer(f)
#         for row in review_con:
#             writer.writerow([row.replace('\u2014', '')])
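As a standalone check of the CSV-writing step in the scraper above (the em-dash stripping and `csv.writer` usage come from the original script; the sample review strings here are hypothetical stand-ins for the scraped `p_review` list), this sketch runs the same cleanup in isolation against an in-memory buffer instead of review.csv:

```python
import csv
import io

# Hypothetical review texts standing in for the scraped `p_review` list.
sample_reviews = ["面白かった\u2014また観たい", "普通の出来"]

# Same pattern as the script above: write one review per row,
# stripping em-dashes (U+2014) before writing.
buf = io.StringIO()
writer = csv.writer(buf)
for row in sample_reviews:
    writer.writerow([row.replace('\u2014', '')])

print(buf.getvalue())
```

Writing to `io.StringIO` makes the behavior easy to inspect without touching the filesystem; swapping `buf` for `open('review.csv', 'w', newline='')` gives the original behavior.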

