
Learning the Linux Shell: the curl Command in Detail

2017-09-10 20:54
curl Command in Detail

(1) Introduction to curl

As a powerful tool, curl supports a long list of protocols, including HTTP, HTTPS and FTP. It also supports POST requests, cookies, authentication, downloading part of a file from a given offset, referers, user-agent strings, extra headers, rate limiting, maximum file-size limits, progress bars and more. If you work with web page usage sequences or automated data retrieval, curl will serve you well.

(2) curl help
curl --help
Usage: curl [options...] <url>
Options: (H) means HTTP/HTTPS only, (F) means FTP only
--anyauth       Pick "any" authentication method (H)
-a/--append        Append to target file when uploading (F/SFTP)
--basic         Use HTTP Basic Authentication (H)
--cacert <file> CA certificate to verify peer against (SSL)
--capath <directory> CA directory to verify peer against (SSL)
-E/--cert <cert[:passwd]> Client certificate file and password (SSL)
--cert-type <type> Certificate file type (DER/PEM/ENG) (SSL)
--ciphers <list> SSL ciphers to use (SSL)
--compressed    Request compressed response (using deflate or gzip)
-K/--config <file> Specify which config file to read
--connect-timeout <seconds> Maximum time allowed for connection
-C/--continue-at <offset> Resumed transfer offset
-b/--cookie <name=string/file> Cookie string or file to read cookies from (H)
-c/--cookie-jar <file> Write cookies to this file after operation (H)
--create-dirs   Create necessary local directory hierarchy
--crlf          Convert LF to CRLF in upload
--crlfile <file> Get a CRL list in PEM format from the given file
-d/--data <data>   HTTP POST data (H)
--data-ascii <data>  HTTP POST ASCII data (H)
--data-binary <data> HTTP POST binary data (H)
--data-urlencode <name=data/name@filename> HTTP POST data url encoded (H)
--delegation STRING GSS-API delegation permission
--digest        Use HTTP Digest Authentication (H)
--disable-eprt  Inhibit using EPRT or LPRT (F)
--disable-epsv  Inhibit using EPSV (F)
-D/--dump-header <file> Write the headers to this file
--egd-file <file> EGD socket path for random data (SSL)
--engine <eng>  Crypto engine to use (SSL). "--engine list" for list
-f/--fail          Fail silently (no output at all) on HTTP errors (H)
-F/--form <name=content> Specify HTTP multipart POST data (H)
--form-string <name=string> Specify HTTP multipart POST data (H)
--ftp-account <data> Account data to send when requested by server (F)
--ftp-alternative-to-user <cmd> String to replace "USER [name]" (F)
--ftp-create-dirs Create the remote dirs if not present (F)
--ftp-method [multicwd/nocwd/singlecwd] Control CWD usage (F)
--ftp-pasv      Use PASV/EPSV instead of PORT (F)
-P/--ftp-port <address> Use PORT with address instead of PASV (F)
--ftp-skip-pasv-ip Skip the IP address for PASV (F)
--ftp-ssl       Try SSL/TLS for ftp transfer (F)
--ftp-ssl-ccc   Send CCC after authenticating (F)
--ftp-ssl-ccc-mode [active/passive] Set CCC mode (F)
--ftp-ssl-control Require SSL/TLS for ftp login, clear for transfer (F)
--ftp-ssl-reqd  Require SSL/TLS for ftp transfer (F)
-G/--get           Send the -d data with a HTTP GET (H)
-g/--globoff       Disable URL sequences and ranges using {} and []
-H/--header <line> Custom header to pass to server (H)
-I/--head          Show document info only
-h/--help          This help text
--hostpubmd5 <md5> Hex encoded MD5 string of the host public key. (SSH)
-0/--http1.0       Use HTTP 1.0 (H)
--ignore-content-length  Ignore the HTTP Content-Length header
-i/--include       Include protocol headers in the output (H/F)
-k/--insecure      Allow connections to SSL sites without certs (H)
--interface <interface> Specify network interface/address to use
-4/--ipv4          Resolve name to IPv4 address
-6/--ipv6          Resolve name to IPv6 address
-j/--junk-session-cookies Ignore session cookies read from file (H)
--keepalive-time <seconds> Interval between keepalive probes
--key <key>     Private key file name (SSL/SSH)
--key-type <type> Private key file type (DER/PEM/ENG) (SSL)
--krb <level>   Enable Kerberos with specified security level (F)
--libcurl <file> Dump libcurl equivalent code of this command line
--limit-rate <rate> Limit transfer speed to this rate
-l/--list-only     List only names of an FTP directory (F)
--local-port <num>[-num] Force use of these local port numbers
-L/--location      Follow Location: hints (H)
--location-trusted Follow Location: and send auth to other hosts (H)
-M/--manual        Display the full manual
--max-filesize <bytes> Maximum file size to download (H/F)
--max-redirs <num> Maximum number of redirects allowed (H)
-m/--max-time <seconds> Maximum time allowed for the transfer
--negotiate     Use HTTP Negotiate Authentication (H)
-n/--netrc         Must read .netrc for user name and password
--netrc-optional Use either .netrc or URL; overrides -n
-N/--no-buffer     Disable buffering of the output stream
--no-keepalive  Disable keepalive use on the connection
--no-sessionid  Disable SSL session-ID reusing (SSL)
--noproxy       Comma-separated list of hosts which do not use proxy
--ntlm          Use HTTP NTLM authentication (H)
-o/--output <file> Write output to <file> instead of stdout
--pass  <pass>  Pass phrase for the private key (SSL/SSH)
--post301       Do not switch to GET after following a 301 redirect (H)
--post302       Do not switch to GET after following a 302 redirect (H)
-#/--progress-bar  Display transfer progress as a progress bar
-x/--proxy <host[:port]> Use HTTP proxy on given port
--proxy-anyauth Pick "any" proxy authentication method (H)
--proxy-basic   Use Basic authentication on the proxy (H)
--proxy-digest  Use Digest authentication on the proxy (H)
--proxy-negotiate Use Negotiate authentication on the proxy (H)
--proxy-ntlm    Use NTLM authentication on the proxy (H)
-U/--proxy-user <user[:password]> Set proxy user and password
--proxy1.0 <host[:port]> Use HTTP/1.0 proxy on given port
-p/--proxytunnel   Operate through a HTTP proxy tunnel (using CONNECT)
--pubkey <key>  Public key file name (SSH)
-Q/--quote <cmd>   Send command(s) to server before file transfer (F/SFTP)
--random-file <file> File for reading random data from (SSL)
-r/--range <range> Retrieve only the bytes within a range
--raw           Pass HTTP "raw", without any transfer decoding (H)
-e/--referer       Referer URL (H)
-O/--remote-name   Write output to a file named as the remote file
--remote-name-all Use the remote file name for all URLs
-R/--remote-time   Set the remote file's time on the local output
-X/--request <command> Specify request command to use
--retry <num>   Retry request <num> times if transient problems occur
--retry-delay <seconds> When retrying, wait this many seconds between each
--retry-max-time <seconds> Retry only within this period
-S/--show-error    Show error. With -s, make curl show errors when they occur
-s/--silent        Silent mode. Don't output anything
--socks4 <host[:port]> SOCKS4 proxy on given host + port
--socks4a <host[:port]> SOCKS4a proxy on given host + port
--socks5 <host[:port]> SOCKS5 proxy on given host + port
--socks5-hostname <host[:port]> SOCKS5 proxy, pass host name to proxy
--socks5-gssapi-service <name> SOCKS5 proxy service name for gssapi
--socks5-gssapi-nec  Compatibility with NEC SOCKS5 server
-Y/--speed-limit   Stop transfer if below speed-limit for 'speed-time' secs
-y/--speed-time    Time needed to trig speed-limit abort. Defaults to 30
-2/--sslv2         Use SSLv2 (SSL)
-3/--sslv3         Use SSLv3 (SSL)
--stderr <file> Where to redirect stderr. - means stdout
--tcp-nodelay   Use the TCP_NODELAY option
-t/--telnet-option <OPT=val> Set telnet option
-z/--time-cond <time> Transfer based on a time condition
-1/--tlsv1         Use TLSv1 (SSL)
--tlsv1.0       Use TLSv1.0 (SSL)
--tlsv1.1       Use TLSv1.1 (SSL)
--tlsv1.2       Use TLSv1.2 (SSL)
--trace <file>  Write a debug trace to the given file
--trace-ascii <file> Like --trace but without the hex output
--trace-time    Add time stamps to trace/verbose output
-T/--upload-file <file> Transfer <file> to remote site
--url <URL>     Set URL to work with
-B/--use-ascii     Use ASCII/text transfer
-u/--user <user[:password]> Set server user and password
-A/--user-agent <string> User-Agent to send to server (H)
-v/--verbose       Make the operation more talkative
-V/--version       Show version number and quit
-w/--write-out <format> What to output after completion
-q                 If used as the first parameter disables .curlrc
(3) curl in practice

Example 1: curl without any options
$ curl URL
Without any options, curl writes the downloaded file to stdout and the progress information to stderr.

Example 2: hiding curl's progress information, with the --silent option.
$ curl --silent URL
[root@MuBanJi_01 curl]# curl http://10.72.10.5:10010 >a.html
% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
Dload  Upload   Total   Spent    Left  Speed
100  5346  100  5346    0     0  2931k      0 --:--:-- --:--:-- --:--:-- 5220k
[root@MuBanJi_01 curl]#
[root@MuBanJi_01 curl]# curl http://10.72.10.5:10010 --silent>a.html
[root@MuBanJi_01 curl]# ll
total 8
-rw-r--r--. 1 root root 5346 Sep 10 19:36 a.html
[root@MuBanJi_01 curl]#
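Because the document goes to stdout and the progress meter to stderr, the two streams can also be captured independently. A minimal sketch, using a local file:// URL so it needs no web server (the /tmp file names are just placeholders for this illustration):

```shell
# The body lands on stdout, the progress meter on stderr, so each stream
# can be redirected to its own file. file:// keeps the example offline.
printf 'hello curl' > /tmp/page.txt
curl file:///tmp/page.txt > /tmp/body.txt 2> /tmp/progress.txt
cat /tmp/body.txt
```

Here /tmp/body.txt holds exactly the downloaded content, while any progress output ends up in /tmp/progress.txt.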
Example 3: the -O and -o options

-o/--output <file>  Write output to <file> instead of stdout
-O/--remote-name    Write output to a file named as the remote file

The -o option writes the downloaded data to a file instead of standard output; the file name is chosen by you and is a required argument. The -O option also writes the downloaded data to a file, but takes no file name: the name is parsed out of the URL. If no file name can be found in the URL, curl reports an error, so make sure the URL points at a remote file. For example, curl http://10.72.10.5:10010/ -O --silent prints an error message, because no file name can be extracted from the URL.
$ curl URL --silent -o filename
$ curl URL --silent -O
[root@MuBanJi_01 curl]# ll
total 0
[root@MuBanJi_01 curl]# curl  --silent -O   http://10.72.10.5:10010/index.html
[root@MuBanJi_01 curl]# ll
total 8
-rw-r--r--. 1 root root 5346 Sep 10 19:45 index.html
[root@MuBanJi_01 curl]# curl  --silent -o a.html   http://10.72.10.5:10010/index.html
[root@MuBanJi_01 curl]# ll
total 16
-rw-r--r--. 1 root root 5346 Sep 10 19:45 a.html
-rw-r--r--. 1 root root 5346 Sep 10 19:45 index.html
[root@MuBanJi_01 curl]#
Example 4: showing a progress bar

To display a #-style progress bar while downloading, replace --silent with -# (long form --progress-bar).

$ curl  --progress-bar -O   http://10.72.10.5:10010/index.html
######################################################################## 100.0%
Example 5: resuming downloads

curl can continue a download from a specific byte offset, which also lets you fetch just part of a file.
$ curl URL/file -C offset
The offset is an integer number of bytes. If you simply want to resume an interrupted download, you do not need to supply the exact offset: pass -C - and curl will work out the correct resume position itself, like this:
$ curl -C - URL
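The mechanics of -C - can be sketched locally with a file:// URL (assuming your curl build supports resuming on the FILE protocol, which seekable local files allow; the /tmp file names are invented for this illustration). The size of the existing output file tells curl where to continue:

```shell
# Simulate an interrupted download: /tmp/local.txt already holds the first
# 5 bytes of the "remote" file. With "-C -", curl reads the output file's
# size, resumes from that offset, and appends only the missing bytes.
printf 'ABCDEFGHIJ' > /tmp/remote.txt     # the full "remote" file
printf 'ABCDE'      > /tmp/local.txt      # the partial download so far
curl -s -C - -o /tmp/local.txt file:///tmp/remote.txt
cat /tmp/local.txt
```

After the resumed transfer, /tmp/local.txt contains the complete file.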
curl then computes where the transfer should resume automatically.

Example 6: setting the referer string with curl

The referer is a string in the HTTP header that identifies the page from which the user reached the current one. If a user clicks a link on page A and lands on page B, the request for page B carries page A's URL in its referer header. Some dynamic pages inspect the referer before returning HTML: a page might show a Google logo when the visitor arrived via a Google search, and a different page when the URL was typed in directly. The site author simply branches on the condition: if the referer is www.google.com, return the Google-flavored page; otherwise return something else. The --referer option sets the referer string:
$ curl --referer Referer_URL target_URL
For example:
$ curl --referer http://google.com http://slynux.org
Example 7: setting cookies with curl

curl can both send and store the cookies used during an HTTP operation. To send cookies, use the --cookie "COOKIES" option. Cookies are given as name=value pairs, separated by semicolons. For example:
$ curl http://example.com --cookie "user=slynux;pass=hack"
To save the received cookies to a file, use the --cookie-jar option. For example:
$ curl URL --cookie-jar cookie_file
Example 8: setting the user-agent string with curl

Some pages that check the user agent will not display unless one is supplied. You have surely run into old websites that only work in Internet Explorer (IE): open them in any other browser and they insist on IE. That is because they inspect the user-agent string. You can set it with curl's --user-agent (or -A) option:
$ curl URL --user-agent "Mozilla/5.0"
Other HTTP headers can be sent with curl as well; repeat -H "header" to pass several. For example:
$ curl -H "Host: www.slynux.org" -H "Accept-language: en" URL
Example 9: limiting the bandwidth curl may use

When bandwidth is limited and shared among several users, --limit-rate can cap curl's download speed so everyone gets a fair share:
$ curl URL --limit-rate 20k
Use the suffix k (kilobytes) or m (megabytes) to specify the rate limit.

Example 10: capping the download size

The --max-filesize option sets the largest file curl will download:
$ curl URL --max-filesize bytes
If the file exceeds the limit, the command exits with a non-zero status; on success it returns 0.

Example 11: authenticating with curl

The -u option handles HTTP or FTP authentication. -u username:password supplies the credentials; you can also leave out the password and type it at the prompt later. For example:
$ curl -u user:pass http://test_auth.com

If you prefer to be prompted for the password, use -u username alone. For example:
$ curl -u user http://test_auth.com

Example 12: printing only the response headers (no body)

-I/--head          Show document info only

Printing only the response headers is useful for all kinds of checks and statistics. To find out whether a page is reachable, for instance, there is no need to download its content; reading the HTTP response headers is enough. Another use is learning a file's size before downloading it, by checking the Content-Length header. Other handy fields can be read from the headers too: Last-Modified tells us when the remote file last changed. With -I or --head, curl prints the HTTP headers without downloading the remote file. For example:
[root@MuBanJi_01 curl]# curl -I http://10.72.10.5:10010/index.html
HTTP/1.1 200 OK
Server: nginx
Date: Sun, 10 Sep 2017 12:25:22 GMT
Content-Type: text/html
Content-Length: 5346
Connection: keep-alive
Vary: Accept-Encoding
Last-Modified: Fri, 18 Aug 2017 09:41:43 GMT
Vary: Accept-Encoding
ETag: "5996b657-14e2"
Accept-Ranges: bytes
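A script can pull just the Content-Length out of the -I output before deciding whether to download. A small sketch, using a file:// URL so it is self-contained (curl reports Content-Length for local files too; with an http:// URL the same pipeline reads the server's response header):

```shell
# Learn a resource's size from its headers without downloading the body.
printf '0123456789' > /tmp/res.bin   # a 10-byte stand-in for the remote file
curl -s -I file:///tmp/res.bin \
  | tr -d '\r' \
  | awk '/^Content-Length:/ {print $2}'
```

The pipeline strips the carriage returns that terminate header lines and prints only the byte count.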
Example 13: the -w option, writing formatted output to stdout

-w/--write-out <format>  What to output after completion

As the name suggests, write-out prints something: curl's -w option emits the given format to standard output after a complete and successful operation. The format is ordinary text mixed with any number of variables written as %{variable_name}; to print a literal %, double it (%%); \n, \r and \t stand for newline, carriage return and tab. curl substitutes the appropriate value for each variable. The available variables are:

url_effective: the URL that was ultimately fetched; this matters when the URL you gave curl redirects (e.g. a 301) and you followed it with -L.
http_code: the numerical response code found in the last retrieved HTTP(S) or FTP(S) transfer, e.g. 200 OK, 301 redirect, 404 not found, 500 server error. (Since 7.18.2 the alias response_code shows the same value.)
$ curl -I -s -o /dev/null -w %{http_code}"\n"   http://10.72.10.5:10010/index.html
200
http_connect: the numerical code found in the last response (from a proxy) to a curl CONNECT request. (Added in 7.12.4)
time_total: the total time of the operation, in seconds, with millisecond resolution.
time_namelookup: the time, in seconds, from the start until name resolving completed.
time_connect: the time, in seconds, from the start until the TCP connection to the remote host (or proxy) was completed. This includes the DNS time above; subtract time_namelookup to get the pure connect time. The later timings nest in the same way.
time_appconnect: the time, in seconds, from the start until the SSL/SSH/etc. connect/handshake with the remote host finished. (Added in 7.19.0)
time_pretransfer: the time, in seconds, from the start until the file transfer was just about to begin, including all pre-transfer commands and negotiations specific to the protocol(s) involved.
time_redirect: the time, in seconds, taken by all redirection steps before the final transfer started, including name lookup, connect, pre-transfer and transfer for each redirect. (Added in 7.12.3)
time_starttransfer: the time, in seconds, from the start until the first byte was just about to be transferred, i.e. time_pretransfer plus the time the server needed to compute the result.
size_download: the total number of bytes downloaded.
size_upload: the total number of bytes uploaded.
size_header: the total number of bytes in the downloaded headers.
size_request: the total number of bytes sent in the HTTP request.
speed_download: the average download speed curl measured for the complete download, in bytes per second.
speed_upload: the average upload speed curl measured for the complete upload, in bytes per second.
content_type: the Content-Type of the requested document, if any (e.g. text/html; charset=UTF-8).
num_connects: the number of new connections made in the recent transfer. (Added in 7.12.3)
num_redirects: the number of redirects followed in the request. (Added in 7.12.3)
redirect_url: when a request is made without -L, the URL a redirect would have taken you to. (Added in 7.18.2)
ftp_entry_path: the initial path libcurl ended up in when logging on to the remote FTP server. (Added in 7.15.4)
ssl_verify_result: the result of the requested SSL peer certificate verification; 0 means it succeeded. (Added in 7.19.0)
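Several variables can be combined in one format string. A quick sketch using a file:// URL so it runs without a network (note that http_code is only meaningful for HTTP transfers, so this example reports size and time instead):

```shell
# Print the downloaded size and total time for a local transfer. With an
# http:// URL the same format string could also include %{http_code}.
printf 'hello' > /tmp/demo.txt
curl -s -o /dev/null -w 'size=%{size_download} time=%{time_total}\n' file:///tmp/demo.txt
```

The body is discarded via -o /dev/null, so only the write-out line reaches stdout, e.g. size=5 followed by the elapsed time.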
Tags: Linux