(the page source obtained in the previous step) | egrep -o 'data-original="[^"]*' | egrep -o 'https://[^ ]*' | sort | uniq
The output shows that only the image URLs from the first three answers are included. Continuing the analysis, another field turns up in the request headers: authorization: oauth ...
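Before scripting everything, the effect of that header can be checked by hand. This is only a sketch: the question id below is a placeholder, the API URL is abbreviated (the full include parameter appears in the script that follows), and the token is the one captured from the browser:

# Hit the answers API once with the captured user-agent and oauth token;
# with the header present the response is plain JSON (paging, totals, data...).
api='https://www.zhihu.com/api/v4/questions/XXXXXX/answers?sort_by=default&limit=20&offset=3'
curl -s --user-agent 'Mozilla/5.0' \
     -H 'authorization: oauth c3cef7c66a1843f8b3a9e6a1e3160e20' \
     "$api" | head -c 300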
After adding them, curl --user-agent 'XXX' -H 'authorization: oauth c3cef7c66a1843f8b3a9e6a1e3160e20' 'api_url' returned normal JSON this time, so the approach is now clear:

#!/bin/bash
# download.sh
# Crawl all images under a Zhihu question
# Usage: ./download.sh https://www.zhihu.com/question/XXXXXX

if [ -z "$1" ]
then
    echo 'need question url'
    exit 1
fi

# Extract the question number from the URL
ques_num=`echo $1 | egrep -o '[0-9]+'`

function gethtml(){
    curl --user-agent 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36' -H 'authorization: oauth c3cef7c66a1843f8b3a9e6a1e3160e20' $1
}

# Image URLs embedded in the question page itself (only the first few answers)
gethtml $1 | egrep -o 'data-original="[^"]*' | egrep -o 'https://[^ ]*' | sort | uniq >> $$.log

api="https://www.zhihu.com/api/v4/questions/${ques_num}/answers?sort_by=default&include=data%5B%2A%5D.is_normal%2Cis_collapsed%2Cannotation_action%2Cannotation_detail%2Ccollapse_reason%2Cis_sticky%2Ccollapsed_by%2Csuggest_edit%2Ccomment_count%2Ccan_comment%2Ccontent%2Ceditable_content%2Cvoteup_count%2Creshipment_settings%2Ccomment_permission%2Cmark_infos%2Ccreated_time%2Cupdated_time%2Creview_info%2Crelationship.is_authorized%2Cis_author%2Cvoting%2Cis_thanked%2Cis_nothelp%2Cupvoted_followees%3Bdata%5B%2A%5D.author.follower_count%2Cbadge%5B%3F%28type%3Dbest_answerer%29%5D.topics&limit=20&offset="

# Page through the API by offset and append the image URLs to $$.log
offset=3
total=`gethtml $api$offset | egrep -o '"totals": [0-9][^,]*' | egrep -o '[0-9]+'`
total=`expr $total + 20`
api_html=''
for((offset=3;offset<$total;offset+=20))
do
    api_html=`echo $api${offset}`
    # the API returns JSON, so the quotes around the URLs are escaped (\"), hence the extra backslashes in the pattern
    gethtml $api_html | egrep -o 'data-original=\\"[^\]*' | egrep -o 'https://[^ ]*' | sort | uniq >> $$.log &
done
wait
echo 'get img url complete'

max_th=50  # maximum number of concurrent downloads, so the machine is not swamped with processes

# Concurrency control
function getimg(){
    if [[ $max_th -ge $((`ps | grep download.sh | wc -l` - 1)) ]]  # minus 1 excludes the grep line itself
    then
    {
        wget $url
    }&
    else
        getimg  # no free slot yet: recurse (busy-wait) until a download finishes
    fi
}

# Download the images
for url in `cat $$.log`
do
    getimg
done
wait
echo 'download complete'
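The getimg function throttles downloads by counting how many copies of download.sh appear in ps and recursing until the count falls below max_th. The same throttling can be had with much less machinery; the sketch below is an alternative, not what the script above uses, and it assumes an xargs that supports -P (GNU and BSD xargs both do) and that the collected URL list has been saved as urls.log:

# Download every URL in urls.log, at most 50 wget processes at a time.
# xargs blocks until all downloads finish, so no explicit wait is needed.
xargs -n 1 -P 50 wget -q < urls.log

The recursive version works, but it busy-loops while all slots are taken; xargs simply refrains from spawning the next wget until one exits.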
Finally, write a script that drops the downloaded images into a web page, and then happily... (the surrounding HTML has to be written by hand, of course, and do use lazy loading, because there are far too many images...)
#!/bin/bash
# Write the image links into the web page
for name in `ls | egrep -o '[a-z0-9]+\.(jp|jpe|pn)g'`
do
    echo ''
done
echo ' '
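The bodies of the two echo statements above did not survive in the archived text, so here is a hypothetical reconstruction of what such a generator could look like, following the data-original / lazy-loading convention mentioned earlier; the file names imgs.html and placeholder.png and the class names are made up:

#!/bin/bash
# Hypothetical generator: emit one lazily-loaded <img> per downloaded image file.
echo '<div class="imgs">' > imgs.html
for name in `ls | egrep -o '[a-z0-9]+\.(jp|jpe|pn)g'`
do
    # data-original carries the real file; a lazy-load script swaps it into src on scroll.
    echo '<img class="lazy" src="placeholder.png" data-original="'$name'">' >> imgs.html
done
echo '</div>' >> imgs.html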