I'm trying to extract the names of all the repositories in my GitHub account and build a script file that clones all of them, using this bash script:
for i in {1..10}
do
curl -u USERNAME:PASS -s https://api.github.com/user/repos?page=$i | grep -oP '"clone_url": "\K(.*)"' > output$i.txt
done
This outputs each repo URL on its own line, but I need to insert git clone at the beginning of each line, so I wrote this (appending | xargs -L1 git clone), which didn't work:
for i in {1..10}
do
curl -u USERNAME:PASS -s https://api.github.com/user/repos?page=$i | grep -oP '"clone_url": "\K(.*)"' | xargs -L1 git clone > output$i.txt
done
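For reference, the reason the xargs variant fails is visible in the grep pattern itself: `(.*)"` is greedy and keeps the closing quote in the match, so git clone receives a URL with a stray trailing `"`. A minimal sketch on a hypothetical line of the API response:

```shell
# A hypothetical single line of the GitHub API JSON response:
line='  "clone_url": "https://github.com/user/repo.git",'

# Original pattern: (.*)" greedily matches up to the last quote,
# so the text printed after \K still contains that quote.
printf '%s\n' "$line" | grep -oP '"clone_url": "\K(.*)"'
# prints: https://github.com/user/repo.git"

# Stopping at the first quote with [^"]* yields a clean URL:
printf '%s\n' "$line" | grep -oP '"clone_url": "\K[^"]*'
# prints: https://github.com/user/repo.git
```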
Best Answer
Using jq is always the best choice for parsing JSON data:
#!/usr/bin/env bash

for i in {1..10}; do
  curl \
    --user USERNAME:PASS \
    --silent \
    "https://api.github.com/user/repos?page=${i}" \
    | jq \
      --raw-output '.[] | "git clone \(.clone_url)"' \
    > "output${i}.txt"
done
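To illustrate what the jq filter produces, here it is run against a hypothetical, abbreviated response body (an array of repository objects):

```shell
# Hypothetical, abbreviated GitHub API response body:
json='[{"clone_url":"https://github.com/user/a.git"},
       {"clone_url":"https://github.com/user/b.git"}]'

# .[] iterates over the array; \(.clone_url) interpolates each
# URL into a "git clone …" line; --raw-output drops the quotes.
jq --raw-output '.[] | "git clone \(.clone_url)"' <<<"$json"
# prints:
#   git clone https://github.com/user/a.git
#   git clone https://github.com/user/b.git
```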
Or, to handle any number of pages, you can tell jq to return a non-zero return code in $? by giving it the --exit-status option. The jq return code can then be tested to continue or end the while loop when the JSON selector returns no result (which happens when the GitHub API returns an empty results page):
#!/usr/bin/env bash

typeset -i page=1 # GitHub API paging starts at page 1

while clone_cmds="$(
  curl \
    --user USERNAME:PASS \
    --silent \
    "https://api.github.com/user/repos?page=${page}" \
    | jq \
      --exit-status \
      --raw-output \
      '.[] | "git clone \(.clone_url)"'
)"; do
  # The queried page result length is > 0
  # Output to the paged file
  # and increase page number
  echo >"output$((page++)).txt" "${clone_cmds}"
done
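The loop terminates because of --exit-status: when the filter produces no output at all (an empty results page, `[]`), jq exits with a non-zero status, which ends the while loop. A minimal sketch of that behavior, without touching the network:

```shell
# A page with results: the filter produces output, jq exits 0,
# so the while loop would run another iteration.
jq --exit-status '.[]' <<<'[{"clone_url":"https://github.com/user/a.git"}]' >/dev/null
echo "non-empty page: exit status $?"

# An empty page: the filter produces no output, jq exits
# non-zero, and the while loop stops.
jq --exit-status '.[]' <<<'[]' >/dev/null
echo "empty page: exit status $?"
```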
If you want the same as above, but with all the repositories in a single file: the following example lets the GitHub API handle the paging, rather than relying on an extra empty request to mark the end of the pages. It can now also handle pages of up to 100 entries, and negotiates a compressed transfer stream when supported. Here is the featured version of your repository clone list:
#!/usr/bin/env bash
# Set either one to authenticate with the GitHub API.
# GitHub 'Oauth2 token':
OAUTH_TOKEN=''
# GitHub 'username:password':
USER_PASS=''
# The GitHub API Base URL:
typeset -r GITHUB_API='https://api.github.com'
# The array of Curl options to authenticate with GitHub:
typeset -a curl_auth
# Populates the authentication options from what is available.
if [[ -n ${OAUTH_TOKEN} ]]; then
  curl_auth=(--header "Authorization: token ${OAUTH_TOKEN}")
elif [[ -n ${USER_PASS} ]]; then
  curl_auth=(--user "${USER_PASS}")
else
  # These $"string" are bash --dump-po-strings ready.
  printf >&2 $"The GitHub API needs authentication with either variable set:"$'\n'
  printf >&2 "OAUTH_TOKEN='%s'\\n" $"GitHub API's Oauth2 token"
  printf >&2 $"or"" USER_PASS='%s:%s'.\\n" $"username" $"password"
  printf >&2 $"See: %s"$'\n' 'https://developer.github.com/v3/#authentication'
  exit 1
fi
# Query the GitHub API for user repositories.
# The default results count per page is 30.
# It can be raised up to 100, to limit the number
# of requests needed to retrieve all the results.
# Response headers contains a Link: <url>; rel="next" as
# long as there is a next page.
# See: https://developer.github.com/v3/#pagination
# Compose the API URL for the first page.
next_page_url="${GITHUB_API}/user/repos?per_page=100&page=1"
# While there is a next page URL to query...
while [[ -n ${next_page_url} ]]; do
  # Send the API request with curl, and get back a complete
  # http_response which --include response headers, and,
  # if supported, handle a --compressed data stream,
  # keeping stderr &2 --silent.
  http_response="$(
    curl \
      --silent \
      --include \
      --compressed \
      "${curl_auth[@]}" \
      "${next_page_url}"
  )"
  # Get the next page URL from the Link: header.
  # Reaching the last page causes the next_page_url
  # variable to be empty.
  next_page_url="$(
    sed \
      --silent \
      '/^[[:space:]]*$/,$d;s/Link:.*<\(.*\)>;[[:space:]]*rel="next".*$/\1/p' \
      <<<"${http_response}"
  )"
  # Get the http_body part from the http_response.
  http_body="$(sed '1,/^[[:space:]]*$/d' <<<"${http_response}")"
  # Query the http_body JSON content with jq.
  jq --raw-output '.[] | "git clone \(.clone_url)"' <<<"${http_body}"
done >"output.txt" # Redirect the whole while loop output to the file.
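The sed program that feeds next_page_url can be exercised on a hypothetical captured response: it deletes everything from the first blank line onward (the body), then prints only the URL of the Link: entry tagged rel="next". A sketch with made-up header values:

```shell
# Hypothetical curl --include output: headers, blank line, JSON body.
http_response='HTTP/1.1 200 OK
Link: <https://api.github.com/user/repos?per_page=100&page=2>; rel="next", <https://api.github.com/user/repos?per_page=100&page=35>; rel="last"
Content-Type: application/json; charset=utf-8

[{"clone_url":"https://github.com/user/a.git"}]'

# Delete from the first blank line to the end (the body), then
# substitute-and-print the URL captured from the rel="next" entry.
sed --silent \
  '/^[[:space:]]*$/,$d;s/Link:.*<\(.*\)>;[[:space:]]*rel="next".*$/\1/p' \
  <<<"${http_response}"
# prints: https://api.github.com/user/repos?per_page=100&page=2
```

On the last page the header carries no rel="next" entry, so the command prints nothing and the while condition `[[ -n ${next_page_url} ]]` ends the loop.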
About bash - adding text to the beginning of each line after grep, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/57132077/