欢迎访问 Fabric 中文文档¶
本站覆盖了 Fabric 的用法和 API 文档。包括变更历史和维护信息在内的 Fabric 基本信息,请见 Fabric 官方网站 。
入门教程¶
对于新用户,或者想大概了解 Fabric 基本功能的同学,请访问 概览 & 教程 。本文档的其它部分将假设你至少已经大概熟悉其中的内容。
概览 & 教程¶
欢迎使用 Fabric!
本文档走马观花式地介绍 Fabric 特性,也是对其使用的快速指导。其他文档(这里通篇的链接都指向它们)可以在 usage documentation 中找到——请不要忘了一并阅读。
Fabric 是什么?¶
如 README 所说:
Fabric 是一个 Python (2.5-2.7) 的库和命令行工具,用来提高基于 SSH 的应用部署和系统管理效率。
更具体地说,Fabric 是:
一个让你通过 命令行 执行 无参数 Python 函数 的工具;
一个让通过 SSH 执行 Shell 命令更加 容易 、 更符合 Python 风格 的命令库(建立于一个更低层次的库)。
自然而然地,大部分用户把这两件事结合着用,使用 Fabric 来写和执行 Python 函数或 task ,以实现与远程服务器的自动化交互。让我们一睹为快吧。
Hello, fab
¶
一个合格的教程少不了这个“惯例”:
def hello():
    print("Hello world!")
把上述代码放在你当前的工作目录中一个名为 fabfile.py
的 Python 模块文件中。然后这个 hello
函数就可以用 fab
工具(随 Fabric 一并安装的命令)来执行了,输出的结果会是这样:
$ fab hello
Hello world!
Done.
以上就是配置文件的全部。它基于 Fabric 实现了一个(极其)简单的构建工具,简单到甚至不用导入任何 Fabric API。
注解
fab
工具所做的只是导入 fabfile 并执行了相应一个或多个的函数,这里并没有任何魔法——任何你能在一个普通 Python 模块中做的事情同样可以在一个 fabfile 中完成。
任务参数¶
和你平时的 Python 编程一样,给任务函数传递参数很有必要。Fabric 支持 Shell 兼容的参数用法: <任务名>:<参数>,<关键字参数名>=<参数值>,...
用起来就是这样,下面我们用一个 say hello 的实例来展开说明一下:
def hello(name="world"):
    print("Hello %s!" % name)
默认情况下, fab hello
的调用结果仍和之前相同,但现在我们可以做些个性化定制了:
$ fab hello:name=Jeff
Hello Jeff!
Done.
用过 Python 编程的同学可能已经猜到了,这样调用会输出一样的结果:
$ fab hello:Jeff
Hello Jeff!
Done.
目前,参数值只能作为 Python 字符串来使用,如果要使用列表这样的复杂类型,需要一些字符串操作处理。将来的版本可能会添加一个类型转换系统以简化这类处理。
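在类型转换系统出现之前,在任务内部对字符串参数做简单切分通常就够用了。下面是一个极简示意(其中 targets 参数名与主机名均为虚构,仅作演示):

def deploy(targets="web1;web2"):
    # 命令行传入的参数到达任务时都是字符串,这里手动切分成列表
    hosts = targets.split(";")
    for host in hosts:
        print("would deploy to %s" % host)

调用时形如 fab deploy:targets="web1;web2;web3"(分号需要引号保护,以免被 shell 解释)。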
本地命令¶
在前面的例子中, fab 实际上只节省了数行 if __name__ == "__main__"
这样的惯例代码而已。Fabric 更主要的设计目的,是配合它自己的 API 使用,包括执行 Shell 命令、传送文件等函数(或操作)接口。
假设我们需要为一个 web 应用创建 fabfile 。具体的情景如下:这个 web 应用的代码使用 git 托管在一台远程服务器 vcshost
上,我们把它的代码库克隆到了本地 localhost
中。我们希望在我们把修改后的代码 push 回 vcshost 时,自动把新的版本安装到另一台远程服务器 my_server
上。我们将通过自动化本地和远程 git 命令来完成这些工作。
关于 fabfile 文件放置位置的最佳实践是项目的根目录:
.
|-- __init__.py
|-- app.wsgi
|-- fabfile.py <-- our fabfile!
|-- manage.py
`-- my_app
|-- __init__.py
|-- models.py
|-- templates
| `-- index.html
|-- tests.py
|-- urls.py
`-- views.py
注解
在这里我们使用一个 Django 应用为例——不过除了它的 SSH 库之外,Fabric 并不依赖于外部代码。
作为起步,我们希望在提交到 VCS(版本控制系统)之前,先执行测试确认应用可以部署:
from fabric.api import local

def prepare_deploy():
    local("./manage.py test my_app")
    local("git add -p && git commit")
    local("git push")
这段代码的输出会是这样:
$ fab prepare_deploy
[localhost] run: ./manage.py test my_app
Creating test database...
Creating tables
Creating indexes
..........................................
----------------------------------------------------------------------
Ran 42 tests in 9.138s
OK
Destroying test database...
[localhost] run: git add -p && git commit
<interactive Git add / git commit edit message session>
[localhost] run: git push
<git push session, possibly merging conflicts interactively>
Done.
这段代码很简单,导入一个 Fabric API: local
,然后用它执行本地 Shell 命令并与之交互,剩下的 Fabric API 也都类似——它们都只是 Python。
用你的方式来组织¶
因为 Fabric “只是 Python”,所以你可以按你喜欢的方式来组织 fabfile 。比如说,把任务分割成多个子任务:
from fabric.api import local

def test():
    local("./manage.py test my_app")

def commit():
    local("git add -p && git commit")

def push():
    local("git push")

def prepare_deploy():
    test()
    commit()
    push()
这个 prepare_deploy
任务仍可以像之前那样调用,但现在只要你愿意,就可以调用更细粒度的子任务。
故障¶
我们的基本案例已经可以正常工作了,但如果测试失败了会怎样?我们应该抓住机会及时停下任务,在部署之前修复这些失败的测试。
Fabric 会检查被调用程序的返回值,如果这些程序没有干净地退出,Fabric 会终止操作。下面我们就来看看如果一个测试用例遇到错误时会发生什么:
$ fab prepare_deploy
[localhost] run: ./manage.py test my_app
Creating test database...
Creating tables
Creating indexes
.............E............................
======================================================================
ERROR: testSomething (my_project.my_app.tests.MainTests)
----------------------------------------------------------------------
Traceback (most recent call last):
[...]
----------------------------------------------------------------------
Ran 42 tests in 9.138s
FAILED (errors=1)
Destroying test database...
Fatal error: local() encountered an error (return code 2) while executing './manage.py test my_app'
Aborting.
太好了!我们什么都不用做,Fabric 检测到了错误并终止,不会继续执行 commit 任务。
故障处理¶
但如果我们想更加灵活,给用户另一个选择,该怎么办?一个名为 warn_only 的设置(或者说 环境变量 ,通常缩写为 env var )可以把中止换为警告,以提供更灵活的错误处理。
让我们把这个设置丢到 test
函数中,然后注意这个 local
调用的结果:
from __future__ import with_statement
from fabric.api import local, settings, abort
from fabric.contrib.console import confirm

def test():
    with settings(warn_only=True):
        result = local('./manage.py test my_app', capture=True)
    if result.failed and not confirm("Tests failed. Continue anyway?"):
        abort("Aborting at user request.")
[...]
为了引入这个新特性,我们需要添加一些新东西:

- 在 Python 2.5 中,需要从 __future__ 中导入 with;
- Fabric 的 contrib.console 子模块提供了 confirm 函数,用于简单的 yes/no 提示;
- settings 上下文管理器用于在特定代码块中应用特定设置;
- local 这样运行命令的操作会返回一个包含执行结果( .failed 或 .return_code 属性)的对象;
- abort 函数用于手动中止任务的执行。
即使增加了上述复杂度,整个处理过程仍然很容易理解,而且它已经远比之前灵活。
建立连接¶
让我们回到 fabfile 的主旨:定义一个 deploy
任务,让它在一台或多台远程服务器上运行,并保证代码是最新的:
def deploy():
    code_dir = '/srv/django/myproject'
    with cd(code_dir):
        run("git pull")
        run("touch app.wsgi")
这里再次引入了一些新的概念:

- Fabric 是 Python——所以我们可以自由地使用变量、字符串等常规的 Python 代码;
- cd 函数是一个简易的前缀上下文管理器,相当于在其代码块内的命令前加上 cd /to/some/directory,与之类似的 lcd 则作用于本地;
- run 和 local 类似,不过是在 远程 而非本地执行。
我们还需要保证在文件顶部导入了这些新函数:
from __future__ import with_statement
from fabric.api import local, settings, abort, run, cd
from fabric.contrib.console import confirm
改好之后,我们重新部署:
$ fab deploy
No hosts found. Please specify (single) host string for connection: my_server
[my_server] run: git pull
[my_server] out: Already up-to-date.
[my_server] out:
[my_server] run: touch app.wsgi
Done.
我们并没有在 fabfile 中指定任何连接信息,所以 Fabric 依旧不知道该在哪里运行这些远程命令。遇到这种情况时,Fabric 会在运行时提示我们。连接的定义使用 SSH 风格的“主机串”(例如: user@host:port ),默认使用你的本地用户名——所以在这个例子中,我们只需要指定主机名 my_server
。
与远程交互¶
如果你已经得到了代码,说明 git pull
执行非常顺利——但如果这是第一次部署呢?最好也能应付这样的情况,这时应该使用 git clone
来初始化代码库:
def deploy():
    code_dir = '/srv/django/myproject'
    with settings(warn_only=True):
        if run("test -d %s" % code_dir).failed:
            run("git clone user@vcshost:/path/to/repo/.git %s" % code_dir)
    with cd(code_dir):
        run("git pull")
        run("touch app.wsgi")
和上面调用 local
一样, run
也能让我们基于 Shell 命令构建干净的 Python 逻辑。这里最有趣的部分是 git clone
:因为我们是用 git 的 SSH 方法来访问 git 服务器上的代码库,这意味着我们远程执行的 run
需要自己提供身份验证。
旧版本的 Fabric(和其他类似的高层次 SSH 库)以一种与外界隔绝的方式运行远程命令,无法进行本地交互。当你迫切需要输入密码或者与远程程序交互时,这就很成问题。
Fabric 1.0 和后续的版本突破了这个限制,并保证你和另一端的会话交互。让我们看看当我们在一台没有 git checkout 的新服务器上运行更新后的 deploy 任务时会发生什么:
$ fab deploy
No hosts found. Please specify (single) host string for connection: my_server
[my_server] run: test -d /srv/django/myproject
Warning: run() encountered an error (return code 1) while executing 'test -d /srv/django/myproject'
[my_server] run: git clone user@vcshost:/path/to/repo/.git /srv/django/myproject
[my_server] out: Cloning into /srv/django/myproject...
[my_server] out: Password: <enter password>
[my_server] out: remote: Counting objects: 6698, done.
[my_server] out: remote: Compressing objects: 100% (2237/2237), done.
[my_server] out: remote: Total 6698 (delta 4633), reused 6414 (delta 4412)
[my_server] out: Receiving objects: 100% (6698/6698), 1.28 MiB, done.
[my_server] out: Resolving deltas: 100% (4633/4633), done.
[my_server] out:
[my_server] run: git pull
[my_server] out: Already up-to-date.
[my_server] out:
[my_server] run: touch app.wsgi
Done.
注意那个 Password:
提示——那就是我们在 web 服务器上的远程 git
应用在请求 git 密码。我们可以在本地输入密码,然后像往常一样继续克隆。
预定义连接¶
每次在运行时才输入连接信息的做法非常原始,Fabric 提供了一套在 fabfile 或命令行中指定服务器信息的简单方法。这里我们不会全部展开说明,但是会展示最常用的一种:设置全局主机列表 env.hosts 。
env 是一个全局的类字典对象,是 Fabric 很多设置的基础,也能在 with 表达式中使用(事实上,前面见过的 settings 就是它的一个简单封装)。因此,我们可以在模块层次上,在 fabfile 的顶部附近修改它,就像这样:
from __future__ import with_statement
from fabric.api import *
from fabric.contrib.console import confirm
env.hosts = ['my_server']
def test():
    do_test_stuff()
当 fab
加载 fabfile 时,将会执行我们对 env
的修改并保存设置的变化。最终结果如上所示:我们的 deploy
任务将在 my_server
上运行。
这就是如何指定 Fabric 一次性控制多台远程服务器的方法: env.hosts
是一个列表, fab
对它迭代,对每个连接运行指定的任务。
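一个极简示意(web1、web2 为假设的主机名):

from fabric.api import env, run

env.hosts = ['web1', 'web2']

def uptime():
    # fab uptime 会先在 web1、再在 web2 上各执行一次
    run('uptime')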
总结¶
虽然经历了很多,我们的 fabfile 文件仍然相当短。下面是它的完整内容:
from __future__ import with_statement
from fabric.api import *
from fabric.contrib.console import confirm

env.hosts = ['my_server']

def test():
    with settings(warn_only=True):
        result = local('./manage.py test my_app', capture=True)
    if result.failed and not confirm("Tests failed. Continue anyway?"):
        abort("Aborting at user request.")

def commit():
    local("git add -p && git commit")

def push():
    local("git push")

def prepare_deploy():
    test()
    commit()
    push()

def deploy():
    code_dir = '/srv/django/myproject'
    with settings(warn_only=True):
        if run("test -d %s" % code_dir).failed:
            run("git clone user@vcshost:/path/to/repo/.git %s" % code_dir)
    with cd(code_dir):
        run("git pull")
        run("touch app.wsgi")
但它已经涉及到了 Fabric 中的很多功能:

- 定义 fabfile 任务,并用 fab 执行;
- 用 local 调用本地 shell 命令;
- 通过 settings 修改 env 变量;
- 处理失败命令、提示用户、手动中止任务;
- 以及定义主机列表、使用 run 来执行远程命令。
还有更多这里没有涉及到的内容,你还可以看看所有“参见”中的链接,以及 索引页 的内容表。
Thanks for reading!
使用文档¶
下面的列表包含了 Fabric (非 API 部分)文档的主要章节。这些内容对 概览 & 教程 中提到的概念进行了扩展,同时还覆盖了一些高级主题。
环境字典 env
¶
Fabric 中有一个简单但是必不可少的部分叫做“环境”:它是 Python 字典的子类,既用作设置,也用于任务间数据空间共享。
目前,环境字典 fabric.state.env
是作为全局的单例实现的,为方便使用也包含在 fabric.api
中。 env
中的键通常也被称为“环境变量”。
运行环境即设置¶
Fabric 的大部分行为可以通过修改 env
变量,例如 env.hosts
,来控制(已经在 概览 & 教程 中见过)。其他经常需要修改的环境变量包括:
- user:Fabric 在建立 SSH 连接时默认使用本地用户名,必要时可以通过修改 env.user 来覆盖。 Execution model 文档中还介绍了如何为每个主机单独设置用户名。
- password:用来显式设置默认连接密码,或在需要的时候提供 sudo 密码。如果没有设置密码或密码错误,Fabric 将会提示你输入。
- warn_only:布尔值,用来设置 Fabric 是否在检测到远程错误时中止。访问 Execution model 以了解更多关于此行为的信息。
除了这些以外还有很多其它环境变量, 环境变量完整列表 文档的底部提供了完整的列表。
settings
上下文管理器¶
很多时候,临时修改 env 变量来调整某项设置很有必要。Fabric 为此提供了 settings 上下文管理器,它接受一个或多个键/值对参数,用于在其代码块内部修改 env 。
例如,在很多情况下 warn_only 设置都非常有用。要把它只应用到几行代码上,可以使用 settings(warn_only=True) ,就像下面这个 contrib exists 函数的简化版一样:
from fabric.api import settings, run

def exists(path):
    with settings(warn_only=True):
        return run('test -e %s' % path)
其他考虑¶
env
虽然是 dict
的子类,但它也做了些修改,以支持属性访问的方式进行读/写,这在前面也有所体现。换句话说, env.host_string 和 env['host_string'] 的作用是完全一样的。我们感觉属性访问通常可以少打一些字,同时能增强代码的可读性,所以这也是推荐的与 env 交互的方式。
作为字典在其他方面也很有用,例如,需要往字符串中插入多个环境变量时,通过 Python 基于 dict 的字符串格式化显得尤其方便。“普通”的字符串格式化是这样的:
print("Executing on %s as %s" % (env.host, env.user))
使用字典格式化字符串更加简短,可读性也更好:
print("Executing on %(host)s as %(user)s" % env)
环境变量完整列表¶
以下是所有预定义(或在 Fabric 运行时定义)的环境变量的完整列表。它们中的大部分都可以直接操作,但最好还是使用上下文管理器:或者用通用的 settings ,或者用特定的上下文管理器,如 cd 。
需注意的是,它们中的大部分可以通过 fab 的命令行参数来设置,详细文档参见 fab 选项和参数 。合适的地方也提供了交叉引用。
abort_exception
¶
Default: None
通常情况下,Fabric 处理错误的方式是先打印错误信息至 stderr,然后调用 sys.exit(1)( env.abort_exception 为 None 时即是这个默认行为)。这项设置允许你覆盖该行为:
将它设置为一个可调用对象,该对象接受单个字符串参数(需要打印的错误信息)并返回一个异常实例。这样 Fabric 就会抛出该异常,而非像 sys.exit 那样退出整个程序。
大部分情况下,你可以简单地将它设置为一个异常类,因为它完美地符合了上面的要求(可调用、接受一个字符串、返回一个异常实例)。例如: env.abort_exception = MyExceptionClass
。
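下面是一个极简示意(DeployError 为假设的自定义异常类),展示将 Fabric 作为库使用时,如何捕获这种异常而不是让整个进程退出:

from fabric.api import env, run

class DeployError(Exception):
    pass

# 可调用、接受一个字符串、返回异常实例——一个异常类即可满足要求
env.abort_exception = DeployError

def careful_task():
    try:
        run('false')  # 命令非零退出时会抛出 DeployError,而非调用 sys.exit(1)
    except DeployError as e:
        print('deploy failed: %s' % e)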
abort_on_prompts
¶
Default: False
当这个值为 True
时,Fabric 将以非交互模式运行。此模式下,任何需要提示用户输入(如提示输入密码、询问连接到哪个主机、fabfile 中触发的 prompt
等等)时,都会调用 abort
。这就保证 Fabric 会话总是明确地中止,而不是在某些意外的情况下傻傻地等待用户输入。
1.1 新版功能.
always_use_pty
¶
Default: True
设置为 False
时, run / sudo 的行为会和使用 pty=False
参数调用一样。
1.0 新版功能.
combine_stderr
¶
Default: True
使 SSH 层合并远程程序的 stdout 和 stderr 流输出,以避免它们在打印时混在一起。查看 合并 stdout 和 stderr 来了解为什么需要这个功能,以及它的实际效果。
1.0 新版功能.
command
¶
Default: None
fab
设置的正在执行的命令名称(例如,执行 $ fab task1 task2
命令,当执行 task1 时, env.command
会被设置为 “task1”
,然后设置为 “task2”
)。仅供显示。
dedupe_hosts
¶
Default: True
去除合并后的主机列表中的重复项,以保证一个主机只会出现一次。(例如,在同时使用 @hosts
和 @roles
,或 -H 和 -R 的时候。)
设置为 False 时不会去除重复项,这将允许用户显式地在同一台主机上将一个任务(并行地,当然也支持串行)运行多次。
1.5 新版功能.
disable_known_hosts
¶
Default: False
如果为 True
,SSH 层将不会加载用户的 known_hosts 文件。这样可以有效地避免当一个“已知主机”改变了 key 但仍然有效时(比如在 EC2 这样的云环境中)产生的异常。
eagerly_disconnect
¶
Default: False
设置为 True 时, fab
会在每个独立任务完成后关闭连接,而不是在整个运行结束后。这有助于避免大量无用的网络会话堆积,或因每个进程可打开的文件限制,或网络硬件的限制而引发问题。
注解
激活时,断开连接的信息会贯穿你的输出信息始终,而非集中在最后。这一点可能会在以后的版本中得到改进。
fabfile
¶
Default: fabfile.py
fab
在加载 fabfile 时查找的文件名。要指定特定的 fabfile 文件,需要使用该文件的完整路径。显然,这个参数不可能在 fabfile 中设置,但可以将它设置在 .fabricrc 文件中,或者通过命令行参数来设置。
gateway
¶
Default: None
允许通过指定主机创建 SSH 驱动的网关。它的值应该是一个普通的 Fabric 主机串,和 env.host_string 中使用的一样。当它被设置时,新创建的连接将会通过这个远程 SSH 连接到最终的目的地。
1.5 新版功能.
host_string
¶
Default: None
指定 Fabric 在执行 run
、 put
等命令时使用的用户/主机/端口。 fab
在与已设置的主机列表交互时设置这个值,将 Fabric 作为库使用时也可以手动设置它。
keepalive
¶
默认值: 0
(不保持连接)
用于指定 SSH keepalive 间隔的数字,基本上对应 SSH 设置参数 ServerAliveInterval
。当网络硬件干扰或其它因素导致连接超时时,它会很有帮助。
1.1 新版功能.
linewise
¶
Default: False
强制以行为单位进行缓冲,以替代按字符/字节缓冲,通常用在并行模式下。可以使用 --linewise
参数来激活。env.parallel 模式隐含了这项设置——即使 linewise
为 False,parallel
如果为 True 就会引发行级输出。
1.3 新版功能.
local_user
¶
一个包含本地系统用户名的只读值。该值即 user 的初始值,不过 user 可以通过 CLI 参数、Python 代码或者指定 host 字符串的方式覆盖,local_user 则会一直保持不变。
no_keys
¶
Default: False
如为 True
则告诉 SSH 层不从 $HOME/.ssh/
目录加载密钥。(当然,你仍可以显式地使用 fab -i
来指定密钥。)
0.9.1 新版功能.
passwords
¶
Default: {}
这个字典主要用于内部使用,会作为按主机串划分的密码缓存自动填充。键是完整的主机串(host string),值为密码(字符串格式)。
警告
如果你手动生成该字典,就必须使用完整的主机登录字符,包括用户和登录信息。查看上面的链接以获取主机字符串 API 的详细信息。
1.0 新版功能.
path
¶
Default: ''
用于执行 run
/sudo
/local
等命令时设置 shell 环境变量 $PATH
。推荐使用上下文管理器 path
来管理该值,不建议手动设置。
1.0 新版功能.
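一个极简示意(/opt/tools/bin 与 mytool 均为假设):

from fabric.api import path, run

def use_custom_tool():
    # 推荐用 path 上下文管理器临时扩展远程 $PATH,而不是手动改写 env.path
    with path('/opt/tools/bin'):
        run('mytool --version')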
remote_interrupt
¶
Default: None
用于设置 Ctrl-C 是用于终止远程命令还是在本地捕获。使用如下:
- None (默认值):只有 open_shell 会把 Ctrl-C 发送给远程, run / sudo 则在本地捕获;
- False:即使 open_shell 也只在本地捕获;
- True:所有函数都会把 Ctrl-C 发送给远程。
1.6 新版功能.
shell
¶
Default: /bin/bash -l -c
在使用 run
等命令时会使用到,作为 shell 包裹在程序外。该值会像这样使用 <env.shell> "<command goes here>"
——比如默认的 Bash -c
选项可以接受命令字符串作为其参数。
sudo_prefix
¶
Default: "sudo -S -p '%(sudo_prompt)s' " % env
sudo
命令调用时的 sudo
命令前缀。如果远程主机的默认 $PATH 上没有 sudo
,或者需要一些其它设置时(比如应用无密码的 sudo 时删除 -p
参数),会需要它。
user
¶
Default: 用户的本地用户名
SSH 层连接远程服务器时使用的用户名。它是一个全局设置,在主机串没有显式指定用户时都会生效;如果主机串显式指定了用户,该设置会在该次连接期间被临时覆盖为主机串中的值。
下面我们使用一个 fabfile 来模拟一下:
from fabric.api import env, hide, run

env.user = 'implicit_user'
env.hosts = ['host1', 'explicit_user@host2', 'host3']

def print_user():
    with hide('running'):
        run('echo "%(user)s"' % env)
执行效果如下:
$ fab print_user
[host1] out: implicit_user
[explicit_user@host2] out: explicit_user
[host3] out: implicit_user
Done.
Disconnecting from host1... done.
Disconnecting from host2... done.
Disconnecting from host3... done.
如你所见,在 host2
上运行时 env.user
被设置为了 "explicit_user"
,但是之后又重新设置为原来的值("implicit_user"
)。
注解
env.user 的使用有点让人困惑(它同时用于设置 和 信息展示),因此未来可能会对其进行修改——信息展示可能会另外采用一个新的 env 变量。
Execution model¶
If you’ve read the 概览 & 教程, you should already be familiar with how Fabric operates in the base case (a single task on a single host.) However, in many situations you’ll find yourself wanting to execute multiple tasks and/or on multiple hosts. Perhaps you want to split a big task into smaller reusable parts, or crawl a collection of servers looking for an old user to remove. Such a scenario requires specific rules for when and how tasks are executed.
This document explores Fabric’s execution model, including the main execution loop, how to define host lists, how connections are made, and so forth.
Execution strategy¶
Fabric defaults to a single, serial execution method, though there is an alternative parallel mode available as of Fabric 1.3 (see 并行执行). This default behavior is as follows:
- A list of tasks is created. Currently this list is simply the arguments given to fab, preserving the order given.
- For each task, a task-specific host list is generated from various sources (see How host lists are constructed below for details.)
- The task list is walked through in order, and each task is run once per host in its host list.
- Tasks with no hosts in their host list are considered local-only, and will always run once and only once.
Thus, given the following fabfile:
from fabric.api import run, env

env.hosts = ['host1', 'host2']

def taskA():
    run('ls')

def taskB():
    run('whoami')
and the following invocation:
$ fab taskA taskB
you will see that Fabric performs the following:
- taskA executed on host1
- taskA executed on host2
- taskB executed on host1
- taskB executed on host2
While this approach is simplistic, it allows for a straightforward composition of task functions, and (unlike tools which push the multi-host functionality down to the individual function calls) enables shell script-like logic where you may introspect the output or return code of a given command and decide what to do next.
Defining tasks¶
For details on what constitutes a Fabric task and how to organize them, please see 定义任务.
Defining host lists¶
Unless you’re using Fabric as a simple build system (which is possible, but not the primary use-case) having tasks won’t do you any good without the ability to specify remote hosts on which to execute them. There are a number of ways to do so, with scopes varying from global to per-task, and it’s possible to mix and match as needed.
Hosts¶
Hosts, in this context, refer to what are also called “host strings”: Python
strings specifying a username, hostname and port combination, in the form of
username@hostname:port
. User and/or port (and the associated @
or
:
) may be omitted, and will be filled by the executing user’s local
username, and/or port 22, respectively. Thus, admin@foo.com:222
,
deploy@website
and nameserver1
could all be valid host strings.
IPv6 address notation is also supported, for example ::1
, [::1]:1222
,
user@2001:db8::1
or user@[2001:db8::1]:1222
. Square brackets
are necessary only to separate the address from the port number. If no
port number is used, the brackets are optional. Also if host string is
specified via command-line argument, it may be necessary to escape
brackets in some shells.
注解
The user/hostname split occurs at the last @
found, so e.g. email
address usernames are valid and will be parsed correctly.
During execution, Fabric normalizes the host strings given and then stores each part (username/hostname/port) in the environment dictionary, for both its use and for tasks to reference if the need arises. See 环境字典 env for details.
Roles¶
Host strings map to single hosts, but sometimes it’s useful to arrange hosts in groups. Perhaps you have a number of Web servers behind a load balancer and want to update all of them, or want to run a task on “all client servers”. Roles provide a way of defining strings which correspond to lists of host strings, and can then be specified instead of writing out the entire list every time.
This mapping is defined as a dictionary, env.roledefs
, which must be
modified by a fabfile in order to be used. A simple example:
from fabric.api import env
env.roledefs['webservers'] = ['www1', 'www2', 'www3']
Since env.roledefs
is naturally empty by default, you may also opt to
re-assign to it without fear of losing any information (provided you aren’t
loading other fabfiles which also modify it, of course):
from fabric.api import env

env.roledefs = {
    'web': ['www1', 'www2', 'www3'],
    'dns': ['ns1', 'ns2']
}
Role definitions are not limited to host configuration only, but may hold
other role specific settings of your choice. This is achieved by defining the
roles as dicts and host strings under a hosts
key:
from fabric.api import env

env.roledefs = {
    'web': {
        'hosts': ['www1', 'www2', 'www3'],
        'foo': 'bar'
    },
    'dns': {
        'hosts': ['ns1', 'ns2'],
        'foo': 'baz'
    }
}
In addition to list/iterable object types, the values in env.roledefs
(or value of hosts
key in dict style definition) may be callables, and will
thus be called when looked up when tasks are run instead of at module load
time. (For example, you could connect to remote servers to obtain role
definitions, and not worry about causing delays at fabfile load time when
calling e.g. fab --list
.)
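As a sketch, a callable roledef might look like this (fetch_web_hosts is a made-up stand-in for whatever lookup you need):

from fabric.api import env

# Hypothetical inventory lookup; imagine an HTTP or database query here.
def fetch_web_hosts():
    return ['www1', 'www2', 'www3']

# The callable is invoked when the role is looked up at task runtime,
# not when the fabfile is imported, so e.g. fab --list stays fast.
env.roledefs = {
    'web': fetch_web_hosts,
}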
Use of roles is not required in any way – it’s simply a convenience in situations where you have common groupings of servers.
在 0.9.2 版更改: Added ability to use callables as roledefs
values.
How host lists are constructed¶
There are a number of ways to specify host lists, either globally or per-task, and generally these methods override one another instead of merging together (though this may change in future releases.) Each such method is typically split into two parts, one for hosts and one for roles.
Globally, via env
¶
The most common method of setting hosts or roles is by modifying two key-value
pairs in the environment dictionary, env: hosts
and roles
.
The value of these variables is checked at runtime, while constructing each
task’s host list.
Thus, they may be set at module level, which will take effect when the fabfile is imported:
from fabric.api import env, run

env.hosts = ['host1', 'host2']

def mytask():
    run('ls /var/www')
Such a fabfile, run simply as fab mytask
, will run mytask
on host1
followed by host2
.
Since the env vars are checked for each task, this means that if you have the
need, you can actually modify env
in one task and it will affect all
following tasks:
from fabric.api import env, run

def set_hosts():
    env.hosts = ['host1', 'host2']

def mytask():
    run('ls /var/www')
When run as fab set_hosts mytask
, set_hosts
is a “local” task – its
own host list is empty – but mytask
will again run on the two hosts given.
注解
This technique used to be a common way of creating fake “roles”, but is less necessary now that roles are fully implemented. It may still be useful in some situations, however.
Alongside env.hosts
is env.roles
(not to be confused with
env.roledefs
!) which, if given, will be taken as a list of role names to
look up in env.roledefs
.
Globally, via the command line¶
In addition to modifying env.hosts
, env.roles
, and
env.exclude_hosts
at the module level, you may define them by passing
comma-separated string arguments to the command-line switches
--hosts/-H
and --roles/-R
, e.g.:
$ fab -H host1,host2 mytask
Such an invocation is directly equivalent to env.hosts = ['host1', 'host2']
– the argument parser knows to look for these arguments and will modify
env
at parse time.
注解
It’s possible, and in fact common, to use these switches to set only a
single host or role. Fabric simply calls string.split(',')
on the given
string, so a string with no commas turns into a single-item list.
It is important to know that these command-line switches are interpreted
before your fabfile is loaded: any reassignment to env.hosts
or
env.roles
in your fabfile will overwrite them.
If you wish to nondestructively merge the command-line hosts with your
fabfile-defined ones, make sure your fabfile uses env.hosts.extend()
instead:
from fabric.api import env, run

env.hosts.extend(['host3', 'host4'])

def mytask():
    run('ls /var/www')
When this fabfile is run as fab -H host1,host2 mytask
, env.hosts
will
then contain ['host1', 'host2', 'host3', 'host4']
at the time that
mytask
is executed.
注解
env.hosts
is simply a Python list object – so you may use
env.hosts.append()
or any other such method you wish.
Per-task, via the command line¶
Globally setting host lists only works if you want all your tasks to run on the same host list all the time. This isn’t always true, so Fabric provides a few ways to be more granular and specify host lists which apply to a single task only. The first of these uses task arguments.
As outlined in fab 选项和参数, it’s possible to specify per-task arguments via a
special command-line syntax. In addition to naming actual arguments to your
task function, this may be used to set the host
, hosts
, role
or
roles
“arguments”, which are interpreted by Fabric when building host lists
(and removed from the arguments passed to the task itself.)
注解
Since commas are already used to separate task arguments from one another,
semicolons must be used in the hosts
or roles
arguments to
delineate individual host strings or role names. Furthermore, the argument
must be quoted to prevent your shell from interpreting the semicolons.
Take the below fabfile, which is the same one we’ve been using, but which doesn’t define any host info at all:
from fabric.api import run

def mytask():
    run('ls /var/www')
To specify per-task hosts for mytask
, execute it like so:
$ fab mytask:hosts="host1;host2"
This will override any other host list and ensure mytask
always runs on
just those two hosts.
Per-task, via decorators¶
If a given task should always run on a predetermined host list, you may wish to
specify this in your fabfile itself. This can be done by decorating a task
function with the hosts
or roles
decorators. These decorators take a variable argument list, like so:
from fabric.api import hosts, run

@hosts('host1', 'host2')
def mytask():
    run('ls /var/www')
They will also take a single iterable argument, e.g.:
my_hosts = ('host1', 'host2')

@hosts(my_hosts)
def mytask():
    # ...
When used, these decorators override any checks of env
for that particular
task’s host list (though env
is not modified in any way – it is simply
ignored.) Thus, even if the above fabfile had defined env.hosts
or the call
to fab uses --hosts/-H
, mytask
would still run
on a host list of ['host1', 'host2']
.
However, decorator host lists do not override per-task command-line arguments, as given in the previous section.
Order of precedence¶
We’ve been pointing out which methods of setting host lists trump the others, as we’ve gone along. However, to make things clearer, here’s a quick breakdown:
- Per-task, command-line host lists (
fab mytask:host=host1
) override absolutely everything else. - Per-task, decorator-specified host lists (
@hosts('host1')
) override theenv
variables. - Globally specified host lists set in the fabfile (
env.hosts = ['host1']
) can override such lists set on the command-line, but only if you’re not careful (or want them to.) - Globally specified host lists set on the command-line (
--hosts=host1
) will initialize theenv
variables, but that’s it.
This logic may change slightly in the future to be more consistent (e.g.
having --hosts
somehow take precedence over env.hosts
in the
same way that command-line per-task lists trump in-code ones) but only in a
backwards-incompatible release.
Combining host lists¶
There is no “unionizing” of hosts between the various sources mentioned in
How host lists are constructed. If env.hosts
is set to ['host1', 'host2', 'host3']
,
and a per-function (e.g. via hosts
) host list is set to
just ['host2', 'host3']
, that function will not execute on host1
,
because the per-task decorator host list takes precedence.
However, for each given source, if both roles and hosts are specified, they will be merged together into a single host list. Take, for example, this fabfile where both of the decorators are used:
from fabric.api import env, hosts, roles, run

env.roledefs = {'role1': ['b', 'c']}

@hosts('a', 'b')
@roles('role1')
def mytask():
    run('ls /var/www')
Assuming no command-line hosts or roles are given when mytask
is executed,
this fabfile will call mytask
on a host list of ['a', 'b', 'c']
– the
union of role1
and the contents of the hosts
call.
Host list deduplication¶
By default, to support Combining host lists, Fabric deduplicates the final host list so any given host string is only present once. However, this prevents explicit/intentional running of a task multiple times on the same target host, which is sometimes useful.
To turn off deduplication, set env.dedupe_hosts to
False
.
Excluding specific hosts¶
At times, it is useful to exclude one or more specific hosts, e.g. to override a few bad or otherwise undesirable hosts which are pulled in from a role or an autogenerated host list.
注解
As of Fabric 1.4, you may wish to use skip_bad_hosts instead, which automatically skips over any unreachable hosts.
Host exclusion may be accomplished globally with --exclude-hosts/-x
:
$ fab -R myrole -x host2,host5 mytask
If myrole
was defined as ['host1', 'host2', ..., 'host15']
, the above
invocation would run with an effective host list of ['host1', 'host3',
'host4', 'host6', ..., 'host15']
.
注解
Using this option does not modify
env.hosts
– it only causes the main execution loop to skip the requested hosts.
Exclusions may be specified per-task by using an extra exclude_hosts
kwarg,
which is implemented similarly to the abovementioned hosts
and roles
per-task kwargs, in that it is stripped from the actual task invocation. This
example would have the same result as the global exclude above:
$ fab mytask:roles=myrole,exclude_hosts="host2;host5"
Note that the host list is semicolon-separated, just as with the hosts
per-task argument.
Combining exclusions¶
Host exclusion lists, like host lists themselves, are not merged together
across the different “levels” they can be declared in. For example, a global
-x
option will not affect a per-task host list set with a decorator or
keyword argument, nor will per-task exclude_hosts
keyword arguments affect
a global -H
list.
There is one minor exception to this rule, namely that CLI-level keyword
arguments (mytask:exclude_hosts=x,y
) will be taken into account when
examining host lists set via @hosts
or @roles
. Thus a task function
decorated with @hosts('host1', 'host2')
executed as fab
taskname:exclude_hosts=host2
will only run on host1
.
As with the host list merging, this functionality is currently limited (partly to keep the implementation simple) and may be expanded in future releases.
Intelligently executing tasks with execute
¶
1.3 新版功能.
Most of the information here involves “top level” tasks executed via fab, such as the first example where we called fab taskA taskB
.
However, it’s often convenient to wrap up multi-task invocations like this into
their own, “meta” tasks.
Prior to Fabric 1.3, this had to be done by hand, as outlined in
作为库使用. Fabric’s design eschews magical behavior, so simply
calling a task function does not take into account decorators such as
roles
.
New in Fabric 1.3 is the execute
helper function, which takes a
task object or name as its first argument. Using it is effectively the same as
calling the given task from the command line: all the rules given above in
How host lists are constructed apply. (The hosts
and roles
keyword arguments to
execute
are analogous to CLI per-task arguments, including how they override all other host/role-setting
methods.)
As an example, here’s a fabfile defining two stand-alone tasks for deploying a Web application:
from fabric.api import env, roles, run

env.roledefs = {
    'db': ['db1', 'db2'],
    'web': ['web1', 'web2', 'web3'],
}

@roles('db')
def migrate():
    # Database stuff here.
    pass

@roles('web')
def update():
    # Code updates here.
    pass
In Fabric <=1.2, the only way to ensure that migrate
runs on the DB servers
and that update
runs on the Web servers (short of manual
env.host_string
manipulation) was to call both as top level tasks:
$ fab migrate update
Fabric >=1.3 can use execute
to set up a meta-task. Update the
import
line like so:
from fabric.api import env, roles, run, execute
and append this to the bottom of the file:
def deploy():
    execute(migrate)
    execute(update)
That’s all there is to it; the roles
decorators will be honored as expected, resulting in the following execution sequence:
- migrate on db1
- migrate on db2
- update on web1
- update on web2
- update on web3
警告
This technique works because tasks that themselves have no host list (this
includes the global host list settings) only run one time. If used inside a
“regular” task that is going to run on multiple hosts, calls to
execute
will also run multiple times, resulting in
multiplicative numbers of subtask calls – be careful!
If you would like your execute
calls to only be called once, you
may use the runs_once
decorator.
Leveraging execute
to access multi-host results¶
In nontrivial Fabric runs, especially parallel ones, you may want to gather up a bunch of per-host result values at the end - e.g. to present a summary table, perform calculations, etc.
It’s not possible to do this in Fabric’s default “naive” mode (one where you
rely on Fabric looping over host lists on your behalf), but with execute
it’s pretty easy. Simply switch from calling the actual work-bearing task, to
calling a “meta” task which takes control of execution with execute
:
from fabric.api import task, execute, run, runs_once

@task
def workhorse():
    return run("get my infos")

@task
@runs_once
def go():
    results = execute(workhorse)
    print results
In the above, workhorse
can do any Fabric stuff at all – it’s literally
your old “naive” task – except that it needs to return something useful.
go
is your new entry point (to be invoked as fab go
, or whatnot) and
its job is to take the results
dictionary from the execute
call and do
whatever you need with it. Check the API docs for details on the structure of
that return value.
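As a rough sketch of such consumption (the exact shape of each value depends on what your tasks return):

# results maps host strings to each host's return value from workhorse().
def summarize(results):
    for host, value in sorted(results.items()):
        print("%s -> %r" % (host, value))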
Using execute
with dynamically-set host lists¶
A common intermediate-to-advanced use case for Fabric is to parameterize lookup
of one’s target host list at runtime (when use of Roles does not
suffice). execute
can make this extremely simple, like so:
from fabric.api import run, execute, task
# For example, code talking to an HTTP API, or a database, or ...
from mylib import external_datastore

# This is the actual algorithm involved. It does not care about host
# lists at all.
def do_work():
    run("something interesting on a host")

# This is the user-facing task invoked on the command line.
@task
def deploy(lookup_param):
    # This is the magic you don't get with @hosts or @roles.
    # Even lazy-loading roles require you to declare available roles
    # beforehand. Here, the sky is the limit.
    host_list = external_datastore.query(lookup_param)
    # Put this dynamically generated host list together with the work to be
    # done.
    execute(do_work, hosts=host_list)
For example, if external_datastore
was a simplistic “look up hosts by tag
in a database” service, and you wanted to run a task on all hosts tagged as
being related to your application stack, you might call the above like this:
$ fab deploy:app
But wait! A data migration has gone awry on the DB servers. Let’s fix up our migration code in our source repo, and deploy just the DB boxes again:
$ fab deploy:db
This use case looks similar to Fabric’s roles, but has much more potential, and is by no means limited to a single argument. Define the task however you wish, query your external data store in whatever way you need – it’s just Python.
The alternate approach¶
Similar to the above, but using fab
’s ability to call multiple tasks in
succession instead of an explicit execute
call, is to mutate
env.hosts in a host-list lookup task and then call do_work
in the same session:
from fabric.api import env, run, task
from mylib import external_datastore

# Marked as a publicly visible task, but otherwise unchanged: still just
# "do the work, let somebody else worry about what hosts to run on".
@task
def do_work():
    run("something interesting on a host")

@task
def set_hosts(lookup_param):
    # Update env.hosts instead of calling execute()
    env.hosts = external_datastore.query(lookup_param)
Then invoke like so:
$ fab set_hosts:app do_work
One benefit of this approach over the previous one is that you can replace
do_work
with any other “workhorse” task:
$ fab set_hosts:db snapshot
$ fab set_hosts:cassandra,cluster2 repair_ring
$ fab set_hosts:redis,environ=prod status
Failure handling¶
Once the task list has been constructed, Fabric will start executing them as outlined in Execution strategy, until all tasks have been run on the entirety of their host lists. However, Fabric defaults to a “fail-fast” behavior pattern: if anything goes wrong, such as a remote program returning a nonzero return value or your fabfile’s Python code encountering an exception, execution will halt immediately.
This is typically the desired behavior, but there are many exceptions to the
rule, so Fabric provides env.warn_only
, a Boolean setting. It defaults to
False
, meaning an error condition will result in the program aborting
immediately. However, if env.warn_only
is set to True
at the time of
failure – with, say, the settings
context
manager – Fabric will emit a warning message but continue executing.
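A minimal sketch of that pattern (the /srv/app path is just an example):

from fabric.api import run, settings

def probe():
    # Warn instead of aborting if the command exits nonzero...
    with settings(warn_only=True):
        result = run("test -d /srv/app")
    # ...then branch on the captured outcome instead of dying.
    if result.failed:
        print("directory missing; taking the fallback path")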
Connections¶
fab
itself doesn’t actually make any connections to remote hosts. Instead,
it simply ensures that for each distinct run of a task on one of its hosts, the
env var env.host_string
is set to the right value. Users wanting to
leverage Fabric as a library may do so manually to achieve similar effects
(though as of Fabric 1.3, using execute
is preferred and more
powerful.)
env.host_string
is (as the name implies) the “current” host string, and is
what Fabric uses to determine what connections to make (or re-use) when
network-aware functions are run. Operations like run
or
put
use env.host_string
as a lookup key in a shared
dictionary which maps host strings to SSH connection objects.
注解
The connections dictionary (currently located at
fabric.state.connections
) acts as a cache, opting to return previously
created connections if possible in order to save some overhead, and
creating new ones otherwise.
Lazy connections¶
Because connections are driven by the individual operations, Fabric will not actually make connections until they’re necessary. Take for example this task which does some local housekeeping prior to interacting with the remote server:
from fabric.api import *

@hosts('host1')
def clean_and_upload():
    local('find assets/ -name "*.DS_Store" -exec rm {} \;')
    local('tar czf /tmp/assets.tgz assets/')
    put('/tmp/assets.tgz', '/tmp/assets.tgz')
    with cd('/var/www/myapp/'):
        run('tar xzf /tmp/assets.tgz')
What happens, connection-wise, is as follows:
- The two local calls will run without making any network connections whatsoever;
- put asks the connection cache for a connection to host1;
- The connection cache fails to find an existing connection for that host string, and so creates a new SSH connection, returning it to put;
- put uploads the file through that connection;
- Finally, the run call asks the cache for a connection to that same host string, and is given the existing, cached connection for its own use.
Extrapolating from this, you can also see that tasks which don’t use any network-borne operations will never actually initiate any connections (though they will still be run once for each host in their host list, if any.)
Closing connections¶
Fabric’s connection cache never closes connections itself – it leaves this up to whatever is using it. The fab tool does this bookkeeping for you: it iterates over all open connections and closes them just before it exits (regardless of whether the tasks failed or not.)
Library users will need to ensure they explicitly close all open connections
before their program exits. This can be accomplished by calling
disconnect_all
at the end of your script.
注解
disconnect_all
may be moved to a more public location in
the future; we’re still working on making the library aspects of Fabric
more solidified and organized.
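A library-style sketch (the host string is made up) that guarantees cleanup:

from fabric.api import env, run
from fabric.network import disconnect_all

env.host_string = 'deploy@example.com'  # hypothetical target host
try:
    run('uname -a')
finally:
    # Close every cached connection so the Python process can exit.
    disconnect_all()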
Multiple connection attempts and skipping bad hosts¶
As of Fabric 1.4, multiple attempts may be made to connect to remote servers before aborting with an error: Fabric will try connecting env.connection_attempts times before giving up, with a timeout of env.timeout seconds each time. (These currently default to 1 try and 10 seconds, to match previous behavior, but they may be safely changed to whatever you need.)
Furthermore, even total failure to connect to a server is no longer an absolute
hard stop: set env.skip_bad_hosts to True
and in
most situations (typically initial connections) Fabric will simply warn and
continue, instead of aborting.
1.4 新版功能.
Password management¶
Fabric maintains an in-memory, two-tier password cache to help remember your
login and sudo passwords in certain situations; this helps avoid tedious
re-entry when multiple systems share the same password [1], or if a remote
system’s sudo
configuration doesn’t do its own caching.
The first layer is a simple default or fallback password cache,
env.password (which may also be set at the command line via
--password
or --initial-password-prompt
). This
env var stores a single password which (if non-empty) will be tried in the
event that the host-specific cache (see below) has no entry for the current
host string.
env.passwords (plural!) serves as a per-user/per-host cache, storing the most recently entered password for every unique user/host/port combination (note that you must include all three values if modifying the structure by hand - see the above link for details). Due to this cache, connections to multiple different users and/or hosts in the same session will only require a single password entry for each. (Previous versions of Fabric used only the single, default password cache and thus required password re-entry every time the previously entered password became invalid.)
Depending on your configuration and the number of hosts your session will connect to, you may find setting either or both of these env vars to be useful. However, Fabric will automatically fill them in as necessary without any additional configuration.
Specifically, each time a password prompt is presented to the user, the value
entered is used to update both the single default password cache, and the cache
value for the current value of env.host_string
.
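For illustration, pre-seeding both caches by hand might look like this (the credentials are obviously made up):

from fabric.api import env

# Per-host cache: keys MUST be full user@host:port strings.
env.passwords = {'deploy@web1:22': 's3cr3t'}
# Single default/fallback, used when no host-specific entry matches.
env.password = 'fallback-pw'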
[1] We highly recommend the use of SSH key-based access instead of relying on homogeneous password setups, as it’s significantly more secure.
Leveraging native SSH config files¶
Command-line SSH clients (such as the one provided by OpenSSH) make use of a specific configuration format typically
known as ssh_config
, and will read from a file in the platform-specific
location $HOME/.ssh/config
(or an arbitrary path given to
--ssh-config-path
/env.ssh_config_path.) This
file allows specification of various SSH options such as default or per-host
usernames, hostname aliases, and toggling other settings (such as whether to
use agent forwarding.)
Fabric’s SSH implementation allows loading a subset of these options from one’s
actual SSH config file, should it exist. This behavior is not enabled by
default (in order to be backwards compatible) but may be turned on by setting
env.use_ssh_config to True
at the top of your
fabfile.
If enabled, the following SSH config directives will be loaded and honored by Fabric:
- User and Port will be used to fill in the appropriate connection parameters when not otherwise specified, in the following fashion:
  - Globally specified User/Port will be used in place of the current defaults (local username and 22, respectively) if the appropriate env vars are not set.
  - However, if env.user/env.port are set, they override global User/Port values.
  - User/port values in the host string itself (e.g. hostname:222) will override everything, including any ssh_config values.
- HostName can be used to replace the given hostname, just like with regular ssh. So a Host foo entry specifying HostName example.com will allow you to give Fabric the hostname 'foo' and have that expanded into 'example.com' at connection time.
- IdentityFile will extend (not replace) env.key_filename.
- ForwardAgent will augment env.forward_agent in an "OR" manner: if either is set to a positive value, agent forwarding will be enabled.
- ProxyCommand will trigger use of a proxy command for host connections, just as with regular ssh.
注解
If all you want to do is bounce SSH traffic off a gateway, you may find env.gateway to be a more efficient connection method (which will also honor more Fabric-level settings) than the typical ssh gatewayhost nc %h %p method of using ProxyCommand as a gateway.
注解
If your SSH config file contains ProxyCommand directives and you have set env.gateway to a non-None value, env.gateway will take precedence and the ProxyCommand will be ignored. If one has a pre-created SSH config file, rationale states it will be easier for you to modify env.gateway (e.g. via settings) than to work around your conf file’s contents entirely.
fab
选项和参数¶
The most common method for utilizing Fabric is via its command-line tool,
fab
, which should have been placed on your shell’s executable path when
Fabric was installed. fab
tries hard to be a good Unix citizen, using a
standard style of command-line switches, help output, and so forth.
基本应用¶
In its most simple form, fab
may be called with no options at all, and
with one or more arguments, which should be task names, e.g.:
$ fab task1 task2
As detailed in 概览 & 教程 and Execution model, this will run task1
followed by task2
, assuming that Fabric was able to find a fabfile nearby
containing Python functions with those names.
However, it’s possible to expand this simple usage into something more flexible, by using the provided options and/or passing arguments to individual tasks.
直接执行远程命令¶
0.9.2 新版功能.
Fabric 还实现了一个鲜为人知的命令行接口,可以像下面这样调用:
$ fab [options] -- [shell command]
--
之后的所有字符都会用于创建一个 run
临时调用,而不会被解析为 fab
的参数。如果你在模块级或者命令行中设置了主机列表,它的执行方式将类似于单行的匿名任务。
例如:假设你想要获取多个系统的内核信息,可以这样做:
$ fab -H system1,system2,system3 -- uname -a
它的作用完全等价于下面的 fabfile:
from fabric.api import run

def anonymous():
    run("uname -a")
像这样执行:
$ fab -H system1,system2,system3 anonymous
大多数情况下,你会把需要反复执行的任务写入 fabfile;而该特性则基于 fabfile 的连接设置,为一次性的 SSH 命令提供了方便快捷的执行方式。
命令行参数¶
A quick overview of all possible command line options can be found via fab
--help
. If you’re looking for details on a specific option, we go into detail
below.
注解
fab
uses Python’s optparse library, meaning that it honors typical
Linux or GNU style short and long options, as well as freely mixing options
and arguments. E.g. fab task1 -H hostname task2 -i path/to/keyfile
is
just as valid as the more straightforward fab -H hostname -i
path/to/keyfile task1 task2
.
-
-a
,
--no_agent
¶
Sets env.no_agent to
True
, forcing our SSH layer not to talk to the SSH agent when trying to unlock private key files.0.9.1 新版功能.
-
-A
,
--forward-agent
¶
Sets env.forward_agent to
True
, enabling agent forwarding.1.4 新版功能.
-
--abort-on-prompts
¶
Sets env.abort_on_prompts to
True
, forcing Fabric to abort whenever it would prompt for input.1.1 新版功能.
-
-c
RCFILE
,
--config
=RCFILE
¶ Sets env.rcfile to the given file path, which Fabric will try to load on startup and use to update environment variables.
-
-d
COMMAND
,
--display
=COMMAND
¶ Prints the entire docstring for the given task, if there is one. Does not currently print out the task’s function signature, so descriptive docstrings are a good idea. (They’re always a good idea, of course – just moreso here.)
-
--connection-attempts
=M
,
-n
M
¶ Set number of times to attempt connections. Sets env.connection_attempts.
1.4 新版功能.
-
-D
,
--disable-known-hosts
¶
Sets env.disable_known_hosts to
True
, preventing Fabric from loading the user’s SSHknown_hosts
file.
-
-f
FABFILE
,
--fabfile
=FABFILE
¶ The fabfile name pattern to search for (defaults to
fabfile.py
), or alternately an explicit file path to load as the fabfile (e.g./path/to/my/fabfile.py
.)
-
-F
LIST_FORMAT
,
--list-format
=LIST_FORMAT
¶ Allows control over the output format of
--list
.short
is equivalent to--shortlist
,normal
is the same as simply omitting this option entirely (i.e. the default), andnested
prints out a nested namespace tree.1.1 新版功能.
-
-g
HOST
,
--gateway
=HOST
¶ Sets env.gateway to
HOST
host string.1.5 新版功能.
-
-h
,
--help
¶
Displays a standard help message, with all possible options and a brief overview of what they do, then exits.
-
--hide
=LEVELS
¶ A comma-separated list of output levels to hide by default.
-
-x
HOSTS
,
--exclude-hosts
=HOSTS
¶ Sets env.exclude_hosts to the given comma-delimited list of host strings to then keep out of the final host list.
1.1 新版功能.
-
-i
KEY_FILENAME
¶ When set to a file path, will load the given file as an SSH identity file (usually a private key.) This option may be repeated multiple times. Sets (or appends to) env.key_filename.
-
-I
,
--initial-password-prompt
¶
Forces a password prompt at the start of the session (after fabfile load and option parsing, but before executing any tasks) in order to pre-fill env.password.
This is useful for fire-and-forget runs (especially parallel sessions, in which runtime input is not possible) when setting the password via
--password
or by setting env.password in your fabfile, is undesirable.注解
The value entered into this prompt will overwrite anything supplied via env.password at module level, or via
--password
.
-
-k
¶
Sets env.no_keys to
True
, forcing the SSH layer to not look for SSH private key files in one’s home directory.0.9.1 新版功能.
-
--keepalive
=KEEPALIVE
¶ Sets env.keepalive to the given (integer) value, specifying an SSH keepalive interval.
1.1 新版功能.
-
--linewise
¶
Forces output to be buffered line-by-line instead of byte-by-byte. Often useful or required for parallel execution.
1.3 新版功能.
-
-l
,
--list
¶
Imports a fabfile as normal, but then prints a list of all discovered tasks and exits. Will also print the first line of each task’s docstring, if it has one, next to it (truncating if necessary.)
在 0.9.1 版更改: Added docstring to output.
-
-p
PASSWORD
,
--password
=PASSWORD
¶ Sets env.password to the given string; it will then be used as the default password when making SSH connections or calling the
sudo
program.
-
-P
,
--parallel
¶
Sets env.parallel to
True
, causing tasks to run in parallel.1.3 新版功能.
-
--no-pty
¶
Sets env.always_use_pty to
False
, causing allrun
/sudo
calls to behave as if one had specifiedpty=False
.1.0 新版功能.
-
-r
,
--reject-unknown-hosts
¶
Sets env.reject_unknown_hosts to
True
, causing Fabric to abort when connecting to hosts not found in the user’s SSHknown_hosts
file.
-
--set
KEY=VALUE,...
¶ Allows you to set default values for arbitrary Fabric env vars. Values set this way have a low precedence – they will not override more specific env vars which are also specified on the command line. E.g.:
fab --set password=foo --password=bar
will result in
env.password = 'bar'
, not'foo'
Multiple
KEY=VALUE
pairs may be comma-separated, e.g.fab --set var1=val1,var2=val2
.Other than basic string values, you may also set env vars to True by omitting the
=VALUE
(e.g.fab --set KEY
), and you may set values to the empty string (and thus a False-equivalent value) by keeping the equals sign, but omittingVALUE
(e.g.fab --set KEY=
.)1.4 新版功能.
-
-s
SHELL
,
--shell
=SHELL
¶ Sets env.shell to the given string, overriding the default shell wrapper used to execute remote commands.
-
--shortlist
¶
Similar to
--list
, but without any embellishment, just task names separated by newlines with no indentation or docstrings.0.9.2 新版功能.
-
--show
=LEVELS
¶ A comma-separated list of output levels to be added to those that are shown by default.
-
--ssh-config-path
¶
Sets env.ssh_config_path.
1.4 新版功能.
-
--skip-bad-hosts
¶
Sets env.skip_bad_hosts, causing Fabric to skip unavailable hosts.
1.4 新版功能.
-
--skip-unknown-tasks
¶
Sets env.skip_unknown_tasks, causing Fabric to skip unknown tasks.
-
--timeout
=N
,
-t
N
¶ Set connection timeout in seconds. Sets env.timeout.
1.4 新版功能.
-
--command-timeout
=N
,
-T
N
¶ Set remote command timeout in seconds. Sets env.command_timeout.
1.6 新版功能.
-
-u
USER
,
--user
=USER
¶ Sets env.user to the given string; it will then be used as the default username when making SSH connections.
-
-V
,
--version
¶
Displays Fabric’s version number, then exits.
-
-w
,
--warn-only
¶
Sets env.warn_only to
True
, causing Fabric to continue execution even when commands encounter error conditions.
-
-z
,
--pool-size
¶
Sets env.pool_size, which specifies how many processes to run concurrently during parallel execution.
1.3 新版功能.
Per-task arguments¶
The options given in 命令行参数 apply to the invocation of
fab
as a whole; even if the order is mixed around, options still apply to
all given tasks equally. Additionally, since tasks are just Python functions,
it’s often desirable to pass in arguments to them at runtime.
Answering both these needs is the concept of “per-task arguments”, which is a special syntax you can tack onto the end of any task name:
- Use a colon (
:
) to separate the task name from its arguments; - Use commas (
,
) to separate arguments from one another (may be escaped by using a backslash, i.e.\,
); - Use equals signs (
=
) for keyword arguments, or omit them for positional arguments. May also be escaped with backslashes.
Additionally, since this process involves string parsing, all values will end up as Python strings, so plan accordingly. (We hope to improve upon this in future versions of Fabric, provided an intuitive syntax can be found.)
For example, a “create a new user” task might be defined like so (omitting most of the actual logic for brevity):
def new_user(username, admin='no', comment="No comment provided"):
    print("New User (%s): %s" % (username, comment))
You can specify just the username:
$ fab new_user:myusername
Or treat it as an explicit keyword argument:
$ fab new_user:username=myusername
If both args are given, you can again give them as positional args:
$ fab new_user:myusername,yes
Or mix and match, just like in Python:
$ fab new_user:myusername,admin=yes
The print
call above is useful for illustrating escaped commas, like
so:
$ fab new_user:myusername,admin=no,comment='Gary\, new developer (starts Monday)'
注解
Quoting the backslash-escaped comma is required, as not doing so will cause shell syntax errors. Quotes are also needed whenever an argument involves other shell-related characters such as spaces.
All of the above are translated into the expected Python function calls. For example, the last call above would become:
>>> new_user('myusername', admin='yes', comment='Gary, new developer (starts Monday)')
Roles and hosts¶
As mentioned in the section on task execution,
there are a handful of per-task keyword arguments (host
, hosts
,
role
and roles
) which do not actually map to the task functions
themselves, but are used for setting per-task host and/or role lists.
These special kwargs are removed from the args/kwargs sent to the task function itself; this is so that you don’t run into TypeErrors if your task doesn’t define the kwargs in question. (It also means that if you do define arguments with these names, you won’t be able to specify them in this manner – a regrettable but necessary sacrifice.)
注解
If both the plural and singular forms of these kwargs are given, the value of the plural will win out and the singular will be discarded.
When using the plural form of these arguments, one must use semicolons (;
)
since commas are already being used to separate arguments from one another.
Furthermore, since your shell is likely to consider semicolons a special
character, you’ll want to quote the host list string to prevent shell
interpretation, e.g.:
$ fab new_user:myusername,hosts="host1;host2"
Again, since the hosts
kwarg is removed from the argument list sent to the
new_user
task function, the actual Python invocation would be
new_user('myusername')
, and the function would be executed on a host list
of ['host1', 'host2']
.
配置文件¶
Fabric currently honors a simple user settings file, or fabricrc
(think
bashrc
but for fab
) which should contain one or more key-value pairs,
one per line. These lines will be subject to string.split('=')
, and thus
can currently only be used to specify string settings. Any such key-value pairs
will be used to update env when fab
runs, and is loaded prior
to the loading of any fabfile.
By default, Fabric looks for ~/.fabricrc
, and this may be overridden by
specifying the -c
flag to fab
.
For example, if your typical SSH login username differs from your workstation
username, and you don’t want to modify env.user
in a project’s fabfile
(possibly because you expect others to use it as well) you could write a
fabricrc
file like so:
user = ssh_user_name
Then, when running fab
, your fabfile would load up with env.user
set to
'ssh_user_name'
. Other users of that fabfile could do the same, allowing
the fabfile itself to be cleanly agnostic regarding the default username.
Fabfile 文件的结构和使用¶
本文档介绍了 fabfile 的使用,以及各式各样的 fabfile 示例,其中不乏最佳实践和反面教材。
指定 fabfile¶
Fabric 能够加载 Python 模块(如: fabfile.py
)和包(如 fabfile/
),默认情况下,它会根据 Python 的导入机制加载名为 fabfile 的模块或包——可以是 fabfile/ ,也可以是 fabfile.py 。
根据 fabfile 的搜寻机制,Fabric 会依次查找用户当前目录以及其上层目录,因此在项目中使用时,可以把 fabfile.py
置于项目的根目录,这样无论进入项目中的任何目录时,调用 fab
命令都可以找到这个 fabfile
配置。
你也可以在命令行中通过 -f
参数,或者在 fabricrc 中指定 fabfile
文件名。例如,想要使用 fab_tasks.py
作为 fabfile 的文件名,你只需要在创建它后输入 fab -f fab_tasks.py <task name>
,或者在 ~/.fabricrc
中添加 fabfile = fab_tasks.py
。
如果指定的 fabfile 文件名中包含了路径元素(比如: ../fabfile.py
或者 /dir1/dir2/custom_fabfile
),而不只是文件名,Fabric 将直接找到该文件,不做任何搜索。这种情况下同样接受波浪线表达式,也就是说你可以这样指定: ~/personal_fabfile.py
。
注解
Fabric 通过 import
(实际上是 __import__
)来获取配置文件内容——而不是 eval
或者类似的方式。它的实现方式是,将 fabfile 所在目录加入 Python 的加载目录(当然之后会将它删去)。
在 0.9.2 版更改: 支持加载 fabfile 包。
引用 Fabric¶
Fabric 本质上依然是 Python,因此你 可以 随意地调用它的组件。不过,出于封装和便捷性(以及 Fabric 脚本的易用性)考虑,Fabric 的公开 API 由 fabric.api
模块维护。
Fabric 的 操作(Operation) 、上下文管理器 、 装饰器 以及 实用工具 都包含在本模块的名字空间中,为 fabfile 提供了一套简单并且统一的接口。你可以像这样使用:
from fabric.api import *
# call run(), sudo(), etc etc
严格来说,这样并不符合最佳实践(出于很多原因),如果你只需要使用少数几个 Fabric API,务必 显式 导入,如 from fabric.api import env, run 。但是在大多数 fabfile 中都会用到其中大部分 API,这时 from fabric.api import *
比下面的写法要更易于读写:
from fabric.api import abort, cd, env, get, hide, hosts, local, prompt, \
put, require, roles, run, runs_once, settings, show, sudo, warn
在上面的例子中,相比最优范式,我们可以更加实用主义一些。
定义任务并导入 callable 任务¶
对于 Fabric 来说怎样才算是任务,以及 Fabric 何时导入 fabfile ,请阅读 Execution model 文档的 定义任务 章节。
与远程程序集成¶
Fabric 的核心操作 run
和 sudo
都支持将本地的输入发送至远程,其表现形式和 ssh
基本一致。例如,有时候会遇到需要密码的情况(比如 dump 数据库,或者修改用户密码时),程序会提供近乎直接的交互环境。
然而,由于 ssh
本身的限制,Fabric 对于该功能的实现并不能保证直观。这篇文档将详细地讨论这些问题。
合并 stdout 和 stderr¶
首先摆在我们面前的是 stdout 和 stderr 流问题,以及他们为什么要根据需求分开或者合并。
缓冲¶
Fabric 0.9.x 及更早版本,以及 Python 本身,都使用行级缓冲输出:直到遇到换行符才会将这一整行输出给用户。在大多数情况下这都能正常工作,但在需要处理不完整行(比如输入提示符)时就很难应付。
注解
行级输出缓冲可能使程序看起来无缘无故地挂起或冻结:提示文字后面没有换行符,程序停在原地等待用户输入确认,而这半行提示却迟迟不会显示出来。
新版本的 Fabric 按字节缓冲输入和输出,这样就可以支持输入提示,也便于与使用 “curses” 库或会重绘屏幕的复杂程序(比如 top
)集成。
交叉输出流¶
不幸的是,同时打印 stderr 和 stdout(像很多其它程序那样)时,两条流会一个字节一个字节地交叉输出,混乱地混合在一起。如果采用行级缓冲,问题虽然仍然存在,但交叉的单位是整行,情况会好得多。
为了解决这个问题,Fabric 在 SSH 层通过配置,在更低的层面合并两条输出流,保证输出能够更加自然一些。这项设置对应 Fabric 环境变量以及关键字参数 combine_stderr,其默认值是 True
。
得益于这项默认设置,输出才能保证正确;但代价是 run / sudo 返回值的 .stderr 属性为空,因为所有错误输出都被合并进了 stdout。
反过来,如果用户需要在 Python 层面获得清晰独立的 stderr 输出流,并且不在乎终端上(或者其它处理命令输出的程序面前)stdout 和 stderr 可能交叉混乱,可以根据需要将其设置为 False
。
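下面是一个最小示意(其中 /no/such/file 是假设的路径,仅用于制造错误输出),展示关闭 combine_stderr 之后如何分别读取 stdout 和 .stderr :
from fabric.api import run, settings

def check_stderr():
    # 关闭合并后,错误输出才会出现在返回值的 .stderr 属性中
    with settings(warn_only=True):
        result = run("ls /no/such/file", combine_stderr=False)
    print("stdout: %r" % str(result))
    print("stderr: %r" % result.stderr)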
伪终端¶
处理提示交互的另一个大问题在于输出用户自身的输入。
重播(echo)¶
一般的终端模拟程序或者真正(bona fide)的文本终端(例如没有 GUI 的 Unix 系统)会提供被称为 tty 或者 pty(即 pseudo-terminal,伪终端)的设备,它会(通过 stdout)自动重播用户输入的全部文本,如果没有它,程序会非常难用。终端也可以选择关闭重播,比如请求用户输入安全密码时。
不过,如果是没有 tty 或者 pty 的程序(比如 cron 定时任务),在这种情况下,程序所接收到的数据都不会被重播回来,这是为了程序能够在没有人类监控的情况下也能完成任务,也是 老版本 Fabric 的默认行为。
Fabric 的实现方法¶
不幸的是,在 Fabric 执行命令的上下文环境中,往往没有用于重播用户输入的 pty,Fabric 只能自行实现重播。对于很多程序来说这已经足够,但是在请求密码输入时会有安全问题。
本着对安全的重视和最小惊讶原则(尽可能让用户的体验如同在真实终端中操作一样),Fabric 1.0 及以上版本默认强制启用 pty:有了 pty,Fabric 不再自己实现重播,而是简单地由远程端决定是重播还是隐藏输入。
注解
除了支持正常的重播行为,使用 pty 还意味着程序会表现得如同连接在终端设备上一样。例如,只在连接终端时才彩色化输出、后台运行时不输出颜色的程序,这时将会输出彩色文本。在检查 run
和 sudo
的输出时需要保持警惕!
如果想要关闭 pty 行为,可以使用命令行参数 --no-pty
或者将环境变量 always_use_pty 设置为 False。
两者结合¶
最后需要提到的是,时刻记住:使用伪终端实际上就意味着合并 stdout 和 stderr——效果如同 combine_stderr 设置那样。这是因为终端设备会将 stdout 和 stderr 发送到同一个地方(用户的屏幕),因此无法将它们区分开来。
然而,在 Fabric 级,这两组设置互相独立,并且可以通过多种方式组合使用。默认情况下,两者的值都为 True
,其它组合的效果如下:
run("cmd", pty=False, combine_stderr=True)
:Fabric 会自己处理所有 stdin,包括密码以及潜在的改变cmd
行为。 当cmd
在 pty 中执行很不方便,而且不必关心密码输入时会很有效。run("cmd", pty=False, combine_stderr=False)
:两项设置都为False
时,Fabric 会使用 stdin 而不会生成一个 pty——这对于大多数稍微复杂的命令来说,很可能会导致意料之外的行为,但是这是唯一能够直接接入远程 stderr 流的方法,所以在某些情况下也会游泳有用。run("cmd", pty=True, combine_stderr=False)
: 合法,但并没有什么不同,因为pty=True
会导致输出流合并,在需要避免combine_stderr
的某些特殊边界情况下(目前没有已知用例)可能会有用。
作为库使用¶
从文档中我们可以看出,Fabric 最主要的应用场景是通过 fab 命令来引用 fabfile ,然而 Fabric 的内部实现在保证它在不使用 fab
和 fabfile 的场合也非常易于使用——本文档将会详细向你介绍。
此外还有些需要时刻谨记的事情,比如:运行 fab 命令执行 fabfile 时,连接是怎样真正创建和断开的。
连接服务器¶
前面我们已经介绍过 Fabric 是怎样连接主机的,不过仍有些知识埋藏在 运行 文档中,具体来说你可能需要快速浏览一遍 Connections 章节的文档。(虽然不必要,但我强烈建议你把整个文档都快速浏览一遍。)
如那些章节中所提到的, run
和 sudo
这样的操作在连接时都会查看同一处设置: env.host_string 。其它设置主机列表的机制都用于 fab
命令,和作为 Python 库使用没有关系。
也就是说,从 Fabric 1.3 开始,如果你想要结合任务 X
和主机列表 Y
,可以使用 execute
,就像这样: execute(X, hosts=Y)
,详细介绍请访问 execute
文档——对需要手动处理主机列表的使用者来说,这是必读内容。
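下面是一个把 Fabric 当作库使用的最小示意(主机名和用户名均为假设):
from fabric.api import env, run
from fabric.tasks import execute

def check_uptime():
    run("uptime")

env.user = "deploy"  # 假设的 SSH 登录用户名
# 将任务 X 与主机列表 Y 结合:execute(X, hosts=Y)
execute(check_uptime, hosts=["web1", "web2"])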
断开连接¶
fab
所做另一件重要的事是,会在会话结束的时候断开所有连接,否则 Python 程序将永远等待网络资源的释放。
在 Fabric 0.9.4 或更新版本中,你可以使用这个函数方便地实现这个功能: disconnect_all
,只需要保证程序结束的时候调用该方法(通常在 try: finally
表达式中,以防意外的错误导致无法释放连接)即可。
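例如(一个最小示意,沿用上文示意中假设的 check_uptime 任务):
from fabric.network import disconnect_all
from fabric.tasks import execute

try:
    execute(check_uptime, hosts=["web1"])
finally:
    # 无论成功与否都释放连接,否则程序可能一直等待而无法退出
    disconnect_all()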
如果你使用的是 Fabric 0.9.3 或更早之前的版本,你可以这样做( disconnect_all
仅仅提供了更好看的输出):
from fabric.state import connections
for key in connections.keys():
    connections[key].close()
    del connections[key]
最后注意¶
本文档只是个草案,因此并不会详细覆盖作为 Fabric 库导入与使用 fab
命令之间的全部区别,不过上面已经列出了其中最需要注意的陷阱。在不确定如何使用时,可以参考 Fabric 源代码中 fabric/main.py
部分,fab
命令的实现主要在这里,相信会是很有用的参考。
输出管理¶
fab
的输出默认非常冗长,几乎会尽可能地输出所有内容,包括远程的 stderr 和 stdout 输出流、被执行的命令,等等。在很多情况下,这对于了解远程执行进度是必需的;但对于复杂的 Fabric 任务,输出很快就会多到难以跟踪。
输出等级¶
为了改进任务输出,Fabric 的输出被整合进一系列基本独立的层级或者集合,其中每一个都可以独立开启或关闭,这为用户端的输出提供了灵活的控制。
注解
除 debug 和 exceptions 外,所有层级的输出默认都是开启的。
标准输出层级¶
标准的原子输出层级/集合包括以下:
status:状态信息。包括提示 Fabric 已结束运行、用户是否使用键盘中止操作、或者服务器是否断开了连接。通常来说这些信息都不会很冗长,但是至关重要。
aborts:终止信息。和状态信息一样,通常只有将 Fabric 作为库使用时才可能需要关闭,而且还并不一定。注意,即使该输出集被关闭,也不能阻止程序退出——你只是看不到 Fabric 退出的原因而已。
warnings:警报信息。通常在预计指定操作失败时会将其关闭,比如说你可能使用
grep
来测试文件中是否有特定文字。注意,将 env.warn_only 设置为 True 会使远程程序执行失败时完全没有警告信息。和 aborts 一样,这项设置本身并不控制警告行为,仅控制是否打印警告信息。
running:输出正在执行的命令或者正在传输的文件名称,比如:
[myserver] run: ls /var/www
。同时它还输出正在运行的任务名,比如:[myserver] Executing task 'foo'
。stdout:本地或远程的 stdout。来自命令行的非错误输出。
stderr:本地或远程的 stderr。比如命令中错误相关的输出。
在 0.9.2 版更改: running
输出级别中新增 “Executing task” 行。
在 0.9.2 版更改: 添加 user
输出级别。
调试输出¶
在调试问题的时候,还有两个可用的原子输出级别: debug ,其行为和其它级别稍有不同; exceptions ,其行为已包含在 debug 中,不过也可以单独开启。
debug:开启调试模式(默认是关闭的)。现在它通常是用于浏览正在执行的“全部”命令,以这个
run
调用为例:run('ls "/home/username/Folder Name With Spaces/"')
通常情况下
running
会详细显示run
所接收到的内容,就像这样:[hostname] run: ls "/home/username/Folder Name With Spaces/"
开启
debug
模式,同时保证 shell 设置是 True ,你将会看到实际发往远程服务器的完整命令字符串:[hostname] run: /bin/bash -l -c "ls \"/home/username/Folder Name With Spaces\""
启用
debug
时,输出还会在退出时显示完整的 Python traceback(如果exceptions
也启用了的话)。注解
在修改其它类型的输出时(比如上面例子中的“running”那一行会显示 shell 以及所有转义字符),debug 设置的优先级最高;因此如果
running
为 False 但debug
为 True,你还是可以在调试区看到 “running”那一行。exceptions:异常发生时是否显示 traceback。如果你对详细的错误信息感兴趣,但
debug
为False
时可以使用。
在 1.0 版更改: 终止时的调试输出现在会包含整个 Python traceback。
在 1.11 版更改: 新增 exceptions
输出级别。
输出级别的别名¶
作为对上述原子/独立级别的补充,Fabric 还提供了一系列便捷的别名,每个别名对应多个级别;切换一个别名,就会同时切换它所对应的所有级别的状态。
output:对应
stdout
和stderr
。如果你只关心“运行”进度和自己设置的输出(和警报),会觉得它很方便。everything:包括
warnings
、running
、user
和output
(见上面介绍)。因此,关闭everything
,你将只能够看到零星输出(只有status
和debug
,如果它们是开启状态的话),以及自己的打印信息。commands:包含
stdout
和running
。适合用于隐藏无错误的命令,只显示所有 stderr 输出。
在 1.4 版更改: 新增 commands
的输出别名。
隐藏和/或显示输出级别¶
你可以通过多种方式切换 Fabric 的输出层级,你可以看看下面每条对应的 API 文档作为例子:
直接修改 fabric.state.output:
fabric.state.output
是字典的子类(类似于 env),以输出层级名为键,值为真(显示某个层级的输出)或假(隐藏)。fabric.state.output
是最底层的输出层级实现,也是 Fabric 决定是否输出的直接引用。上下文管理器:
hide
和show
是决定被包含的命令输出是隐藏还是显示的两个上下文管理器,接受一个或多个层级字符串名做为参数。和其它上下文管理器一样,退出被包含的代码块时,设置会恢复原状。命令行参数:你可以使用 fab 选项和参数
--hide
以及/或者--show
,其效果正如其名(不过,如你所想,会是全局应用),其参数应当是逗号分隔的字符输入。
并行执行¶
1.3 新版功能.
默认情况下,Fabric 会默认 顺序 执行所有任务(详细信息参见 Execution strategy ),这篇文档将介绍 Fabric 如何在多个主机上 并行 执行任务,包括 Fabric 参数设置、任务独立的装饰器,以及命令行全局控制。
它是如何运转的¶
由于 Fabric 1.x 并不是完全线程安全的(而且为了更加通用,任务函数之间并不会产生交互),该功能的实现基于 Python 的 multiprocessing 模块,它会为每一个主机和任务的组合创建一个进程,同时提供了一个(可选的)滑动窗口,用于防止同时运行过多的进程。
举个例子,假设你正打算更新数台服务器上的 Web 应用代码,所有服务的代码都更新后开始重启服务器(这样代码更新失败的时候比较容易回滚)。你可能会写出下面这样的代码:
from fabric.api import *

def update():
    with cd("/srv/django/myapp"):
        run("git pull")

def reload():
    sudo("service apache2 reload")
在三台服务器上并行执行,就像这样:
$ fab -H web1,web2,web3 update reload
最常见的情况是没有启用任何并行执行参数,这时 Fabric 将会按顺序在服务器上执行:
在 web1 上更新
在 web2 上更新
在 web3 上更新
在 web1 上重新加载配置
在 web2 上重新加载配置
在 web3 上重新加载配置
如果激活并行执行(通过 -P
——下面会详细介绍)它将变成这样:
在 web1 、 web2 和 web3 上更新
在 web1 、 web2 和 web3 上重新加载配置
这样做的好处非常明显——如果 update
花费 5 秒 reload
花费 2 秒顺序执行总共会花费 (5+2)*3 = 21 秒,而并行执行只需要它的 1/3,也就是 (5+2) = 7 秒。
如何使用¶
装饰器¶
由于并行执行影响的最小单位是任务,所以功能的启用或禁用也是以任务为单位使用 parallel
或 serial
装饰器。以下面这个 fabfile 为例:
from fabric.api import *

@parallel
def runs_in_parallel():
    pass

def runs_serially():
    pass
如果这样执行:
$ fab -H host1,host2,host3 runs_in_parallel runs_serially
将会按照这样的流程执行:
runs_in_parallel 运行在 host1 、 host2 和 host3 上
runs_serially 运行在 host1 上
runs_serially 运行在 host2 上
runs_serially 运行在 host3 上
命令行参数¶
你也可以使用命令行选项 -P
或者环境变量 env.parallel 强制所有任务并行执行。不过被 serial 装饰器封装的任务会忽略该设置,仍旧保持顺序执行。
例如,下面的 fabfile 会产生和上面同样的执行顺序:
from fabric.api import *

def runs_in_parallel():
    pass

@serial
def runs_serially():
    pass
在这样调用时:
$ fab -H host1,host2,host3 -P runs_in_parallel runs_serially
和上面一样,runs_in_parallel
将会并行执行,runs_serially
顺序执行。
bubble 大小¶
主机列表很大时,用户的机器可能会因为并发运行了太多的 Fabric 进程而被压垮,因此,你可能会选择 moving bubble 方法来限制 Fabric 并发执行的活跃进程数。
默认情况下没有使用 bubble 限制,所有主机都运行在并发池中。你可以在任务级别指定 parallel
的关键字参数 pool_size
来覆盖该设置,或者使用选项 -z
全局设置。
例如同时在 5 个主机上运行:
from fabric.api import *

@parallel(pool_size=5)
def heavy_task():
    # lots of heavy local lifting or lots of IO here
    pass
或者不使用关键字参数 pool_size
:
$ fab -P -z 5 heavy_task
SSH 行为¶
Fabric 使用纯 Python 实现的 SSH 库来管理连接,也就是说,可能会因为该库的兼容性限制而出现问题。在下面几种情况下,不能保证 Fabric 一切正常,也无法做到和 ssh 控制台命令一样灵活。
未知主机¶
SSH 的主机密钥 tracking 机制会记录所有你打算连接的主机,并将主机的认证信息(一般是 IP 地址,但有时也可以是域名)和 SSH 密钥映射并保存在 ~/.ssh/known_hosts
文件中。(对其工作方式感兴趣请参阅 OpenSSH 文档 。)
paramiko
库会加载 known_hosts
文件,并尝试将它和你要连接的主机映射起来,并提供参数设置用于决定连接未知主机(主机名或者 IP 不存在于 known_hosts
文件中)时的行为:
Reject :在不安全时拒绝连接。它将抛出一个 Python 异常,因而终止 Fabric 会话,并输出“未知主机”信息。
Add :将新的主机密钥添加到内存中的已知主机列表,然后一切如常地继续 SSH 连接。注意,它并不会修改你的
known_hosts
文件。Ask 并不是 Fabric 中实现的,而是
paramiko
库提供的选项,它会询问用户是否接受该未知主机的密钥。
在 Fabric 中,控制究竟是拒绝连接还是添加主机的选项是 env.reject_unknown_hosts ,方便起见其默认值是 False
,我们认为这是安全和便利之间合适的折中方案,对此有异议可以在 fabfile 中设置 env.reject_unknown_hosts = True
以提高安全等级。
已知主机但更换了密钥¶
SSH 密钥/指纹认证机制的目的在于检测中间人攻击:如果攻击者将你的 SSH 流量转向他控制的计算机,并将其伪装为你的目的主机,将会检测到主机密钥不匹配。因此 SSH (及其 Python 实现)发现主机密钥与 known_hosts
文件中记录不一致时,默认都会立即拒绝连接。
在某些情况下,比如部署 EC2 时,你可能会打算忽略该问题,我们目前所采用的 SSH 层并没有提供对该操作的明确控制,但是可以通过跳过 known_hosts
文件的加载过程——如果 known_hosts
文件为空,则不会出现记录不一致的问题。如果你需要这样做,可以设置 env.disable_known_hosts 为 True
,其默认值为 False
以遵从 SSH 的默认设置。
警告
启用 env.disable_known_hosts 会使你暴露在中间人攻击中!请小心使用。
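下面的示意片段汇总了这两个开关(两者相互独立,按需选用,放在 fabfile 顶部即可):
from fabric.api import env

# 更安全:拒绝连接 known_hosts 中不存在的主机
env.reject_unknown_hosts = True

# 跳过 known_hosts 的加载(例如 EC2 地址复用的场景)
# 警告:这会使你暴露在中间人攻击之下!
env.disable_known_hosts = True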
定义任务¶
在 Fabric 1.1 中存在两种定义 fabfile 中对象为任务的方式:
- “新式”方法(1.1 及之后版本支持):任务必须是 Task 或其子类的实例,and also descends into imported modules to allow building nested namespaces.
- The “classic” method from 1.0 and earlier considers all public callable objects (functions, classes etc) and only considers the objects in the fabfile itself, with no recursing into imported modules.
注解
These two methods are mutually exclusive: if Fabric finds any
new-style task objects in your fabfile or in modules it imports, it will
assume you’ve committed to this method of task declaration and won’t
consider any non-Task
callables. If no new-style tasks
are found, it reverts to the classic behavior.
下面的文档将详细探究这两种方法。
注解
To see exactly what tasks in your fabfile may be executed via fab
, use
fab --list
.
新式任务¶
Fabric 1.1 introduced the Task
class to facilitate new features
and enable some programming best practices, specifically:
- Object-oriented tasks. Inheritance and all that comes with it can make for much more sensible code reuse than passing around simple function objects. The classic style of task declaration didn’t entirely rule this out, but it also didn’t make it terribly easy.
- Namespaces. Having an explicit method of declaring tasks makes it easier
to set up recursive namespaces without e.g. polluting your task list with the
contents of Python’s
os
module (which would show up as valid “tasks” under the classic methodology.)
除刚刚介绍的 Task
外,还有两种设置新式任务的方式:
- Decorate a regular module level function with @task , which transparently wraps the function in a Task subclass. The function name will be used as the task name when invoking.
- Subclass Task ( Task itself is intended to be abstract), define a run method, and instantiate your subclass at module level. Instances’ name attributes are used as the task name; if omitted the instance’s variable name will be used instead.
新式任务还允许设置 namespaces 。
@task
装饰器¶
The quickest way to make use of new-style task features is to wrap basic task functions with @task
:
from fabric.api import task, run
@task
def mytask():
run("a command")
When this decorator is used, it signals to Fabric that only functions wrapped in the decorator are to be loaded up as valid tasks. (When not present, classic-style task behavior kicks in.)
参数¶
@task
may also be called with arguments to
customize its behavior. Any arguments not documented below are passed into the
constructor of the task_class
being used, with the function itself as the
first argument (see 使用 @task 定制子类 for details.)
task_class : The Task subclass used to wrap the decorated function. Defaults to WrappedCallableTask .
aliases : An iterable of string names which will be used as aliases for the wrapped function. See 别名 for details.
alias : Like aliases but taking a single string argument instead of an iterable. If both alias and aliases are specified, aliases will take precedence.
default : A boolean value determining whether the decorated task also stands in for its containing module as a task name. See 默认任务.
name : A string setting the name this task appears as to the command-line interface. Useful for task names that would otherwise shadow Python builtins (which is technically legal but frowned upon and bug-prone.)
别名¶
Here’s a quick example of using the alias
keyword argument to facilitate
use of both a longer human-readable task name, and a shorter name which is
quicker to type:
from fabric.api import task
@task(alias='dwm')
def deploy_with_migrations():
    pass
Calling --list
on this fabfile would show both the original
deploy_with_migrations
and its alias dwm
:
$ fab --list
Available commands:
deploy_with_migrations
dwm
When more than one alias for the same function is needed, simply swap in the
aliases
kwarg, which takes an iterable of strings instead of a single
string.
默认任务¶
In a similar manner to aliases, it’s sometimes useful to designate a given task within a module as the “default” task, which may be called by referencing just the module name. This can save typing and/or allow for neater organization when there’s a single “main” task and a number of related tasks or subroutines.
For example, a deploy
submodule might contain tasks for provisioning new
servers, pushing code, migrating databases, and so forth – but it’d be very
convenient to highlight a task as the default “just deploy” action. Such a
deploy.py
module might look like this:
from fabric.api import task
@task
def migrate():
    pass

@task
def push():
    pass

@task
def provision():
    pass

@task
def full_deploy():
    if not provisioned:
        provision()
    push()
    migrate()
With the following task list (assuming a simple top level fabfile.py
that just imports deploy
):
$ fab --list
Available commands:
deploy.full_deploy
deploy.migrate
deploy.provision
deploy.push
Calling deploy.full_deploy
on every deploy could get kind of old, or somebody new to the team might not be sure if that’s really the right task to run.
Using the default
kwarg to @task
, we can tag
e.g. full_deploy
as the default task:
@task(default=True)
def full_deploy():
    pass
这样之后,将任务列表更新成这样:
$ fab --list
Available commands:
deploy
deploy.full_deploy
deploy.migrate
deploy.provision
deploy.push
Note that full_deploy
still exists as its own explicit task – but now
deploy
shows up as a sort of top level alias for full_deploy
.
If multiple tasks within a module have default=True
set, the last one to
be loaded (typically the one lowest down in the file) will take precedence.
顶层的默认任务¶
Using @task(default=True)
in the top level fabfile will cause the denoted
task to execute when a user invokes fab
without any task names (similar to
e.g. make
.) When using this shortcut, it is not possible to specify
arguments to the task itself – use a regular invocation of the task if this
is necessary.
Task
子类¶
If you’re used to classic-style tasks, an easy way to
think about Task
subclasses is that their run
method is
directly equivalent to a classic task; its arguments are the task arguments
(other than self
) and its body is what gets executed.
例如,新式任务会像这样:
class MyTask(Task):
    name = "deploy"

    def run(self, environment, domain="whatever.com"):
        run("git clone foo")
        sudo("service apache2 restart")

instance = MyTask()
和下面这个基于函数的任务作用完全一致:
@task
def deploy(environment, domain="whatever.com"):
    run("git clone foo")
    sudo("service apache2 restart")
Note how we had to instantiate an instance of our class; that’s simply normal
Python object-oriented programming at work. While it’s a small bit of
boilerplate right now – for example, Fabric doesn’t care about the name you
give the instantiation, only the instance’s name
attribute – it’s well
worth the benefit of having the power of classes available.
We plan to extend the API in the future to make this experience a bit smoother.
使用 @task
定制子类¶
It’s possible to marry custom Task
subclasses with @task
. This may be useful in cases where your core
execution logic doesn’t do anything class/object-specific, but you want to
take advantage of class metaprogramming or similar techniques.
Specifically, any Task
subclass which is designed to take in a
callable as its first constructor argument (as the built-in
WrappedCallableTask
does) may be specified as the
task_class
argument to @task
.
Fabric will automatically instantiate a copy of the given class, passing in the wrapped function as the first argument. All other args/kwargs given to the decorator (besides the “special” arguments documented in 参数) are added afterwards.
Here’s a brief and somewhat contrived example to make this obvious:
from fabric.api import task
from fabric.tasks import Task
class CustomTask(Task):
    def __init__(self, func, myarg, *args, **kwargs):
        super(CustomTask, self).__init__(*args, **kwargs)
        self.func = func
        self.myarg = myarg

    def run(self, *args, **kwargs):
        return self.func(*args, **kwargs)

@task(task_class=CustomTask, myarg='value', alias='at')
def actual_task():
    pass
When this fabfile is loaded, a copy of CustomTask
is instantiated, effectively calling:
task_obj = CustomTask(actual_task, myarg='value')
Note how the alias
kwarg is stripped out by the decorator itself and never
reaches the class instantiation; this is identical in function to how
command-line task arguments work.
命名空间(Namespace)¶
With classic tasks, fabfiles were limited to a single,
flat set of task names with no real way to organize them. In Fabric 1.1 and
newer, if you declare tasks the new way (via @task
or your own Task
subclass instances) you may take advantage
of namespacing:
- Any module objects imported into your fabfile will be recursed into, looking for additional task objects.
- Within submodules, you may control which objects are “exported” by using the
standard Python
__all__
module-level variable name (though they should still be valid new-style task objects.)
- These tasks will be given new dotted-notation names based on the modules they came from, similar to Python’s own import syntax.
Let’s build up a fabfile package from simple to complex and see how this works.
基础¶
We start with a single __init__.py
containing a few tasks (the Fabric API
import omitted for brevity):
@task
def deploy():
    ...

@task
def compress():
    ...
fab --list
的输出会像这样:
deploy
compress
There’s just one namespace here: the “root” or global namespace. Looks simple now, but in a real-world fabfile with dozens of tasks, it can get difficult to manage.
引用子目录¶
As mentioned above, Fabric will examine any imported module objects for tasks,
regardless of where that module exists on your Python import path. For now we
just want to include our own, “nearby” tasks, so we’ll make a new submodule in
our package for dealing with, say, load balancers – lb.py
:
@task
def add_backend():
    ...
我们再在 __init__.py
的顶部加上:
import lb
现在 fab --list
会显示:
deploy
compress
lb.add_backend
Again, with only one task in its own submodule, it looks kind of silly, but the benefits should be pretty obvious.
深入了解¶
Namespacing isn’t limited to just one level. Let’s say we had a larger setup
and wanted a namespace for database related tasks, with additional
differentiation inside that. We make a sub-package named db/
and inside it,
a migrations.py
module:
@task
def list():
    ...

@task
def run():
    ...
We need to make sure that this module is visible to anybody importing db
,
so we add it to the sub-package’s __init__.py
:
import migrations
As a final step, we import the sub-package into our root-level __init__.py
,
so now its first few lines look like this:
import lb
import db
这样之后文件的树形结构会变成这样:
.
├── __init__.py
├── db
│ ├── __init__.py
│ └── migrations.py
└── lb.py
fab --list
会显示:
deploy
compress
lb.add_backend
db.migrations.list
db.migrations.run
We could also have specified (or imported) tasks directly into
db/__init__.py
, and they would show up as db.<whatever>
as you might
expect.
使用 __all__
加以限制¶
You may limit what Fabric “sees” when it examines imported modules, by using
the Python convention of a module level __all__
variable (a list of
variable names.) If we didn’t want the db.migrations.run
task to show up by
default for some reason, we could add this to the top of db/migrations.py
:
__all__ = ['list']
Note the lack of 'run'
there. You could, if needed, import run
directly
into some other part of the hierarchy, but otherwise it’ll remain hidden.
封装¶
我们已经将 fabfile 库嵌套组织起来并直接引用,但是重要的并不是文件系统层级结构,Fabric 的加载器只关心模块名和引用的时机。
例如,如果我们修改最顶层的 __init__.py
成这样:
import db as database
任务列表会因此改变:
deploy
compress
lb.add_backend
database.migrations.list
database.migrations.run
This applies to any other import – you could import third party modules into your own task hierarchy, or grab a deeply nested module and make it appear near the top level.
嵌套的列表输出¶
As a final note, we’ve been using the default Fabric --list
output during this section – it makes it more obvious what the actual task
names are. However, you can get a more nested or tree-like view by passing
nested
to the --list-format
option:
$ fab --list-format=nested --list
Available commands (remember to call as module.[...].task):
deploy
compress
lb:
add_backend
database:
migrations:
list
run
While it slightly obfuscates the “real” task names, this view provides a handy way of noting the organization of tasks in large namespaces.
传统任务¶
When no new-style Task
-based tasks are found, Fabric will
consider any callable object found in your fabfile, except the following:
- Callables whose name starts with an underscore (
_
). In other words, Python’s usual “private” convention holds true here. - Callables defined within Fabric itself. Fabric’s own functions such as
run
andsudo
will not show up in your task list.
导入¶
Python’s import
statement effectively includes the imported objects in your
module’s namespace. Since Fabric’s fabfiles are just Python modules, this means
that imports are also considered as possible classic-style tasks, alongside
anything defined in the fabfile itself.
注解
This only applies to imported callable objects – not modules. Imported modules only come into play if they contain new-style tasks, at which point this section no longer applies.
Because of this, we strongly recommend that you use the import module
form
of importing, followed by module.callable()
, which will result in a cleaner
fabfile API than doing from module import callable
.
下面是一个使用 urllib.urlopen
从网络服务下载数据的 fabfile 的例子。
from urllib import urlopen
from fabric.api import run
def webservice_read():
    objects = urlopen('http://my/web/service/?foo=bar').read().split()
    print(objects)
看起来这已经足够简单,并且没有错误。然而,如果在这个 fabfile 上运行 fab --list
就会这样:
$ fab --list
Available commands:
webservice_read List some directories.
urlopen urlopen(url [, data]) -> open file-like object
Our fabfile of only one task is showing two “tasks”, which is bad enough, and
an unsuspecting user might accidentally try to call fab urlopen
, which
probably won’t work very well. Imagine any real-world fabfile, which is likely
to be much more complex, and hopefully you can see how this could get messy
fast.
作为参考,下面是推荐的使用方法:
import urllib
from fabric.api import run
def webservice_read():
objects = urllib.urlopen('http://my/web/service/?foo=bar').read().split()
print(objects)
这只是一个很小的变化,但能大幅改善 fabfile 的使用体验。
API 文档¶
Fabric 维护了两套根据代码中 docstring 自动生成的 API 文档(它们都十分详尽)。
核心 API¶
核心 API 是指构成 Fabric 基础构建块的函数、类和方法(例如 run
和 sudo
)。而其他部分(下文的“扩展 API”和用户的 fabfile)都是在这些核心 API 的基础之上构建的。
提供彩色输出的函数¶
0.9.2 新版功能.
封装字符串,提供 ANSI 色彩输出的函数。
本模块中的所有函数均返回包裹对应色彩 ANSI 字符的 text
字符串。
例如,在支持 ANSI 的终端中打印绿色文字:
from fabric.colors import green
print(green("This text is green!"))
这些函数返回值都是修改后的字符串,因此你也可以嵌套使用它们:
from fabric.colors import red, green
print(red("This sentence is red, except for " + green("these words, which are green") + "."))
如果 bold
值为 True
,字符串将会被 ANSI 粗体标记所包裹,通常根据终端实现上的不同显示黑体或更明亮的颜色。
fabric.colors.blue(text, bold=False)¶
fabric.colors.cyan(text, bold=False)¶
fabric.colors.green(text, bold=False)¶
fabric.colors.magenta(text, bold=False)¶
fabric.colors.red(text, bold=False)¶
fabric.colors.white(text, bold=False)¶
fabric.colors.yellow(text, bold=False)¶
上下文管理器¶
Context managers for use with the with
statement.
注解
When using Python 2.5, you will need to start your fabfile
with from __future__ import with_statement
in order to make use of
the with
statement (which is a regular, non __future__
feature of
Python 2.6+.)
注解
If you are using multiple directly nested with
statements, it can
be convenient to use multiple context expressions in one single with
statement. Instead of writing:
with cd('/path/to/app'):
with prefix('workon myvenv'):
run('./manage.py syncdb')
run('./manage.py loaddata myfixture')
you can write:
with cd('/path/to/app'), prefix('workon myvenv'):
run('./manage.py syncdb')
run('./manage.py loaddata myfixture')
Note that you need Python 2.7+ for this to work. On Python 2.5 or 2.6, you can do the following:
from contextlib import nested
with nested(cd('/path/to/app'), prefix('workon myvenv')):
...
Finally, note that settings
implements
nested
itself – see its API doc for details.
fabric.context_managers.cd(path)¶
Context manager that keeps directory state when calling remote operations.
Any calls to run, sudo, get, or put within the wrapped block will implicitly have a string similar to "cd <path> && " prefixed in order to give the sense that there is actually statefulness involved.
Because use of cd affects all such invocations, any code making use of those operations, such as much of the contrib section, will also be affected by use of cd.
Like the actual ‘cd’ shell builtin, cd may be called with relative paths (keep in mind that your default starting directory is your remote user’s $HOME) and may be nested as well.
Below is a “normal” attempt at using the shell ‘cd’, which doesn’t work due to how shell-less SSH connections are implemented – state is not kept between invocations of run or sudo:
run('cd /var/www')
run('ls')
The above snippet will list the contents of the remote user’s $HOME instead of /var/www. With cd, however, it will work as expected:
with cd('/var/www'):
    run('ls')  # Turns into "cd /var/www && ls"
Finally, a demonstration (see inline comments) of nesting:
with cd('/var/www'):
    run('ls')  # cd /var/www && ls
    with cd('website1'):
        run('ls')  # cd /var/www/website1 && ls
注解
This context manager is currently implemented by appending to (and, as always, restoring afterwards) the current value of an environment variable,
env.cwd
. However, this implementation may change in the future, so we do not recommend manually alteringenv.cwd
– only the behavior ofcd
will have any guarantee of backwards compatibility.注解
Space characters will be escaped automatically to make dealing with such directory names easier.
在 1.0 版更改: Applies to
get
andput
in addition to the command-running operations.参见
fabric.context_managers.char_buffered(*args, **kwds)¶
Force local terminal pipe to be character, not line, buffered.
Only applies on Unix-based systems; on Windows this is a no-op.
fabric.context_managers.hide(*args, **kwds)¶
Context manager for setting the given output groups to False.
groups must be one or more strings naming the output groups defined in output. The given groups will be set to False for the duration of the enclosed block, and restored to their previous value afterwards.
For example, to hide the “[hostname] run:” status lines, as well as preventing printout of stdout and stderr, one might use hide as follows:
def my_task():
    with hide('running', 'stdout', 'stderr'):
        run('ls /var/www')
fabric.context_managers.lcd(path)¶
Context manager for updating local current working directory.
This context manager is identical to cd, except that it changes a different env var (lcwd, instead of cwd) and thus only affects the invocation of local and the local arguments to get/put.
Relative path arguments are relative to the local user’s current working directory, which will vary depending on where Fabric (or Fabric-using code) was invoked. You can check what this is with os.getcwd. It may be useful to pin things relative to the location of the fabfile in use, which may be found in env.real_fabfile
1.0 新版功能.
fabric.context_managers.path(path, behavior='append')¶
Append the given path to the PATH used to execute any wrapped commands.
Any calls to run or sudo within the wrapped block will implicitly have a string similar to "PATH=$PATH:<path> " prepended before the given command.
You may customize the behavior of path by specifying the optional behavior keyword argument, as follows:
'append': append given path to the current $PATH, e.g. PATH=$PATH:<path>. This is the default behavior.
'prepend': prepend given path to the current $PATH, e.g. PATH=<path>:$PATH.
'replace': ignore previous value of $PATH altogether, e.g. PATH=<path>.
注解
This context manager is currently implemented by modifying (and, as always, restoring afterwards) the current value of environment variables, env.path and env.path_behavior. However, this implementation may change in the future, so we do not recommend manually altering them directly.
1.0 新版功能.
-
fabric.context_managers.
prefix
(command)¶ Prefix all wrapped
run
/sudo
commands with given command plus&&
.This is nearly identical to
cd
, except that nested invocations append to a list of command strings instead of modifying a single string.Most of the time, you’ll want to be using this alongside a shell script which alters shell state, such as ones which export or alter shell environment variables.
For example, one of the most common uses of this tool is with the
workon
command from virtualenvwrapper:
with prefix('workon myvenv'):
    run('./manage.py syncdb')
In the above snippet, the actual shell command run would be this:
$ workon myvenv && ./manage.py syncdb
This context manager is compatible with
cd
, so if your virtualenv doesn’tcd
in itspostactivate
script, you could do the following:
with cd('/path/to/app'):
    with prefix('workon myvenv'):
        run('./manage.py syncdb')
        run('./manage.py loaddata myfixture')
Which would result in executions like so:
$ cd /path/to/app && workon myvenv && ./manage.py syncdb
$ cd /path/to/app && workon myvenv && ./manage.py loaddata myfixture
Finally, as alluded to near the beginning,
prefix
may be nested if desired, e.g.:
with prefix('workon myenv'):
    run('ls')
    with prefix('source /some/script'):
        run('touch a_file')
The result:
$ workon myenv && ls
$ workon myenv && source /some/script && touch a_file
Contrived, but hopefully illustrative.
-
fabric.context_managers.
quiet
()¶ Alias to
settings(hide('everything'), warn_only=True)
.Useful for wrapping remote interrogative commands which you expect to fail occasionally, and/or which you want to silence.
Example:
with quiet():
    have_build_dir = run("test -e /tmp/build").succeeded
When used in a task, the above snippet will not produce any
run: test -e /tmp/build
line, nor will any stdout/stderr display, and command failure is ignored.参见
1.5 新版功能.
-
fabric.context_managers.
remote_tunnel
(*args, **kwds)¶ Create a tunnel forwarding a locally-visible port to the remote target.
For example, you can let the remote host access a database that is installed on the client host:
# Map localhost:6379 on the server to localhost:6379 on the client,
# so that the remote 'redis-cli' program ends up speaking to the local
# redis-server.
with remote_tunnel(6379):
    run("redis-cli -i")
The database might be installed on a client only reachable from the client host (as opposed to on the client itself):
# Map localhost:6379 on the server to redis.internal:6379 on the client
with remote_tunnel(6379, local_host="redis.internal"):
    run("redis-cli -i")
remote_tunnel
accepts up to four arguments:remote_port
(mandatory) is the remote port to listen to.local_port
(optional) is the local port to connect to; the default is the same port as the remote one.local_host
(optional) is the locally-reachable computer (DNS name or IP address) to connect to; the default islocalhost
(that is, the same computer Fabric is running on).remote_bind_address
(optional) is the remote IP address to bind to for listening, on the current target. It should be an IP address assigned to an interface on the target (or a DNS name that resolves to such IP). You can use “0.0.0.0” to bind to all interfaces.
注解
By default, most SSH servers only allow remote tunnels to listen to the localhost interface (127.0.0.1). In these cases,
remote_bind_address
is ignored by the server, and the tunnel will listen only to 127.0.0.1.
-
fabric.context_managers.
settings
(*args, **kwargs)¶ Nest context managers and/or override
env
variables.settings
serves two purposes:Most usefully, it allows temporary overriding/updating of
env
with any provided keyword arguments, e.g.with settings(user='foo'):
. Original values, if any, will be restored once thewith
block closes.- The keyword argument
clean_revert
has special meaning forsettings
itself (see below) and will be stripped out before execution.
- The keyword argument
In addition, it will use contextlib.nested to nest any given non-keyword arguments, which should be other context managers, e.g.
with settings(hide('stderr'), show('stdout')):
.
These behaviors may be specified at the same time if desired. An example will hopefully illustrate why this is considered useful:
def my_task():
    with settings(
        hide('warnings', 'running', 'stdout', 'stderr'),
        warn_only=True
    ):
        if run('ls /etc/lsb-release'):
            return 'Ubuntu'
        elif run('ls /etc/redhat-release'):
            return 'RedHat'
The above task executes a
run
statement, but will warn instead of aborting if thels
fails, and all output – including the warning itself – is prevented from printing to the user. The end result, in this scenario, is a completely silent task that allows the caller to figure out what type of system the remote host is, without incurring the handful of output that would normally occur.Thus,
settings
may be used to set any combination of environment variables in tandem with hiding (or showing) specific levels of output, or in tandem with any other piece of Fabric functionality implemented as a context manager.If
clean_revert
is set toTrue
,settings
will not revert keys which are altered within the nested block, instead only reverting keys whose values remain the same as those given. More examples will make this clear; below is howsettings
operates normally:# Before the block, env.parallel defaults to False, host_string to None with settings(parallel=True, host_string='myhost'): # env.parallel is True # env.host_string is 'myhost' env.host_string = 'otherhost' # env.host_string is now 'otherhost' # Outside the block: # * env.parallel is False again # * env.host_string is None again
The internal modification of
env.host_string
is nullified – not always desirable. That’s whereclean_revert
comes in:# Before the block, env.parallel defaults to False, host_string to None with settings(parallel=True, host_string='myhost', clean_revert=True): # env.parallel is True # env.host_string is 'myhost' env.host_string = 'otherhost' # env.host_string is now 'otherhost' # Outside the block: # * env.parallel is False again # * env.host_string remains 'otherhost'
Brand new keys which did not exist in
env
prior to usingsettings
are also preserved ifclean_revert
is active. WhenFalse
, such keys are removed when the block exits.1.4.1 新版功能: The
clean_revert
kwarg.
-
fabric.context_managers.
shell_env
(**kw)¶ Set shell environment variables for wrapped commands.
For example, the below shows how you might set a ZeroMQ related environment variable when installing a Python ZMQ library:
with shell_env(ZMQ_DIR='/home/user/local'):
    run('pip install pyzmq')
As with
prefix
, this effectively turns therun
command into:$ export ZMQ_DIR='/home/user/local' && pip install pyzmq
Multiple key-value pairs may be given simultaneously.
注解
If used to affect the behavior of
local
when running from a Windows localhost,SET
commands will be used to implement this feature.
-
fabric.context_managers.
show
(*args, **kwds)¶ Context manager for setting the given output
groups
to True.groups
must be one or more strings naming the output groups defined inoutput
. The given groups will be set to True for the duration of the enclosed block, and restored to their previous value afterwards.For example, to turn on debug output (which is typically off by default):
def my_task():
    with show('debug'):
        run('ls /var/www')
As almost all output groups are displayed by default,
show
is most useful for turning on the normally-hiddendebug
group, or when you know or suspect that code calling your own code is trying to hide output withhide
.
-
fabric.context_managers.
warn_only
()¶ Alias to
settings(warn_only=True)
.参见
装饰器¶
fabfile 中可以方便使用的装饰器。
-
fabric.decorators.
hosts
(*host_list)¶ 该装饰器用于指定被装饰的函数执行在那台主机或哪些主机列表上。
例如:如果不在控制台覆盖相关参数的话,将会在
host1
、host2
以及host3
上执行my_func
,并且在host1
和host3
上都指定了登录用户。@hosts('user1@host1', 'host2', 'user2@host3') def my_func(): pass
hosts
接受 host 的参数列表(@hosts('host1')
,@hosts('host1', 'host2')
)或者一个 hosts 可迭代对象(@hosts(['host1', 'host2'])
)。要注意,这个装饰器仅仅会设置函数的
.hosts
属性,which is then read prior to executing the function.在 0.9.2 版更改: 可以接收一个可迭代对象作为唯一参数(
@hosts(iterable)
),不再要求这样写:@hosts(*iterable)
。
-
fabric.decorators.
parallel
(pool_size=None)¶ 强制被装饰的函数并行执行而非同步执行。
该装饰器的优先级高于全局变量 env.parallel。如果函数还装饰了
serial
的话,依旧是它的优先级更高。1.3 新版功能.
-
fabric.decorators.
roles
(*role_list)¶ 该装饰器用于定义(服务器)“角色”名,然后用于寻找对应的主机列表。
角色是定义在
env
中的键,其对应的值是一个或多个主机连接字符穿的列表。例如:不考虑控制台参数覆盖的话,my_func
将会在webserver
和dbserver
角色对应的主机列表上执行:env.roledefs.update({ 'webserver': ['www1', 'www2'], 'dbserver': ['db1'] }) @roles('webserver', 'dbserver') def my_func(): pass
和
hosts
一样,roles
也接受参数列表,或者单个可迭代对象作为参数,其实现机制是设置<function>.roles
,同样类似于hosts
。在 0.9.2 版更改: (和
hosts
一样)支持可迭代对象作为唯一参数。
-
fabric.decorators.
runs_once
(func)¶ 阻止函数多次执行的装饰器。
通过保存内部状态,使用该装饰器可以保证函数在每个 Python 解释器中只运行一次,通常在使用时它的作用都是“每个
fab
程序生命周期中只运行一次”。任何被该装饰器装饰的函数在第二次、第三次……第 n 次执行时都会静默失败,并返回初次运行的结果。
注解
runs_once
无法和任务并行执行同时生效。
-
fabric.decorators.
serial
(func)¶ 强制被装饰的函数顺序执行,不并行执行。
该装饰器效果的优先级高于全局变量 env.parallel。如果任务同时被
serial
和parallel
装饰器装饰,parallel
的优先级更高。1.3 新版功能.
-
fabric.decorators.
task
(*args, **kwargs)¶ 将函数封装为新式任务的装饰器。
可以作为简单的、无参数的装饰器使用(
@task
这样),也可以使用参数修订其行为(比如:@task(alias='myalias')
)。关于 new-style task 装饰器的使用请参见其文档。
在 1.2 版更改: 新增关键字参数
alias
、aliases
、task_class
和default
。详情参见 参数。在 1.5 版更改: 新增关键字参数
name
。参见
~fabric.docs.unwrap_tasks`、
WrappedCallableTask
文档助手¶
-
fabric.docs.
unwrap_tasks
(module, hide_nontasks=False)¶ 将
module
中的任务对象替换为自己封装的函数。具体来说,你可以将
WrappedCallableTask
的实例替换为其.wrapped
属性(原先被封装的函数)。它应该和 Sphinx 文档工具一起使用,使用在项目
conf.py
文件的底部,用于保证文档工具只会接触到“真正”的函数,不包括函数签名之类。通过使用unwrap_tasks
,自动生成文档工具将不会发现文档签名(尽管任然能发现__doc__
等)。例如,你可以在
conf.py
的底部写上:
from fabric.docs import unwrap_tasks
import my_package.my_fabfile
unwrap_tasks(my_package.my_fabfile)
只需要设置
hide_nontasks=True
就可以 隐藏 所有非任务函数,它保证所有这些对象不会被识别为任务,因此会被当作是私有的,Sphinx 自动生成文档时也会将其略过。如果你的 fabfile 中混有子程序(subroutine)和任务,而你 只 希望将任务文档化,
hide_nontasks
对你会非常有用。如果你在 Fabric 代码实际运行环境中使用它(而非 Sphinx
conf.py
中),请立即就医。(原文就是“please seek immediate medical attention”——译者注)参见
网络¶
Classes and subroutines dealing with network connections and related topics.
-
fabric.network.
disconnect_all
()¶ Disconnect from all currently connected servers.
Used at the end of
fab
‘s main loop, and also intended for use by library users.
-
class
fabric.network.
HostConnectionCache
¶ Dict subclass allowing for caching of host connections/clients.
This subclass will intelligently create new client connections when keys are requested, or return previously created connections instead.
It also handles creating new socket-like objects when required to implement gateway connections and
ProxyCommand
, and handing them to the inner connection methods.Key values are the same as host specifiers throughout Fabric: optional username +
@
, mandatory hostname, optional:
+ port number. Examples:example.com
- typical Internet host address.firewall
- atypical, but still legal, local host address.user@example.com
- with specific username attached.bob@smith.org:222
- with specific nonstandard port attached.
When the username is not given,
env.user
is used.env.user
defaults to the currently running user at startup but may be overwritten by user code or by specifying a command-line flag.Note that differing explicit usernames for the same hostname will result in multiple client connections being made. For example, specifying
user1@example.com
will create a connection toexample.com
, logged in asuser1
; later specifyinguser2@example.com
will create a new, 2nd connection asuser2
.The same applies to ports: specifying two different ports will result in two different connections to the same host being made. If no port is given, 22 is assumed, so
example.com
is equivalent toexample.com:22
.-
__getitem__
(key)¶ Autoconnect + return connection object
-
__weakref__
¶ list of weak references to the object (if defined)
-
connect
(key)¶ Force a new connection to
key
host string.
-
fabric.network.
connect
(user, host, port, cache, seek_gateway=True)¶ Create and return a new SSHClient instance connected to given host.
参数: - user – Username to connect as.
- host – Network hostname.
- port – SSH daemon port.
- cache – A
HostConnectionCache
instance used to cache/store gateway hosts when gatewaying is enabled. - seek_gateway – Whether to try setting up a gateway socket for this connection. Used so the actual gateway connection can prevent recursion.
-
fabric.network.
denormalize
(host_string)¶ Strips out default values for the given host string.
If the user part is the default user, it is removed; if the port is port 22, it also is removed.
-
fabric.network.
disconnect_all
() Disconnect from all currently connected servers.
Used at the end of
fab
‘s main loop, and also intended for use by library users.
-
fabric.network.
get_gateway
(host, port, cache, replace=False)¶ Create and return a gateway socket, if one is needed.
This function checks
env
for gateway or proxy-command settings and returns the necessary socket-like object for use by a final host connection.参数: - host – Hostname of target server.
- port – Port to connect to on target server.
- cache – A
HostConnectionCache
object, in which gatewaySSHClient
objects are to be retrieved/cached. - replace – Whether to forcibly replace a cached gateway client object.
返回: A
socket.socket
-like object, orNone
if none was created.
-
fabric.network.
join_host_strings
(user, host, port=None)¶ Turns user/host/port strings into
user@host:port
combined string.This function is not responsible for handling missing user/port strings; for that, see the
normalize
function.If
host
looks like IPv6 address, it will be enclosed in square bracketsIf
port
is omitted, the returned string will be of the formuser@host
.
-
fabric.network.
key_filenames
()¶ Returns list of SSH key filenames for the current env.host_string.
Takes into account ssh_config and env.key_filename, including normalization to a list. Also performs
os.path.expanduser
expansion on any key filenames.
-
fabric.network.
key_from_env
(passphrase=None)¶ Returns a paramiko-ready key from a text string of a private key
-
fabric.network.
needs_host
(func)¶ Prompt user for value of
env.host_string
whenenv.host_string
is empty.This decorator is basically a safety net for silly users who forgot to specify the host/host list in one way or another. It should be used to wrap operations which require a network connection.
Due to how we execute commands per-host in
main()
, it’s not possible to specify multiple hosts at this point in time, so only a single host will be prompted for.Because this decorator sets
env.host_string
, it will prompt once (and only once) per command. Asmain()
clearsenv.host_string
between commands, this decorator will also end up prompting the user once per command (in the case where multiple commands have no hosts set, of course.)
-
fabric.network.
normalize
(host_string, omit_port=False)¶ Normalizes a given host string, returning explicit host, user, port.
If
omit_port
is given and is True, only the host and user are returned.This function will process SSH config files if Fabric is configured to do so, and will use them to fill in some default values or swap in hostname aliases.
-
fabric.network.
normalize_to_string
(host_string)¶ normalize() returns a tuple; this returns another valid host string.
-
fabric.network.
prompt_for_password
(prompt=None, no_colon=False, stream=None)¶ Prompts for and returns a new password if required; otherwise, returns None.
A trailing colon is appended unless
no_colon
is True.If the user supplies an empty password, the user will be re-prompted until they enter a non-empty password.
prompt_for_password
autogenerates the user prompt based on the current host being connected to. To override this, specify a string value forprompt
.stream
is the stream the prompt will be printed to; if not given, defaults tosys.stderr
.
-
fabric.network.
ssh_config
(host_string=None)¶ Return ssh configuration dict for current env.host_string host value.
Memoizes the loaded SSH config file, but not the specific per-host results.
This function performs the necessary “is SSH config enabled?” checks and will simply return an empty dict if not. If SSH config is enabled and the value of env.ssh_config_path is not a valid file, it will abort.
May give an explicit host string as
host_string
.
业务(Operation)¶
应当在 fabfile 或者其他非核心代码中运行的函数,例如 run()/sudo()。
-
fabric.operations.
get
(*args, **kwargs)¶ 从远程主机下载一个或多个文件。
get
returns an iterable containing the absolute paths to all local files downloaded, which will be empty iflocal_path
was a StringIO object (see below for more on using StringIO). This object will also exhibit a.failed
attribute containing any remote file paths which failed to download, and a.succeeded
attribute equivalent tonot .failed
.remote_path
is the remote file or directory path to download, which may contain shell glob syntax, e.g."/var/log/apache2/*.log"
, and will have tildes replaced by the remote home directory. Relative paths will be considered relative to the remote user’s home directory, or the current remote working directory as manipulated bycd
. If the remote path points to a directory, that directory will be downloaded recursively.local_path
is the local file path where the downloaded file or files will be stored. If relative, it will honor the local current working directory as manipulated bylcd
. It may be interpolated, using standard Python dict-based interpolation, with the following variables:host
: The value ofenv.host_string
, egmyhostname
oruser@myhostname-222
(the colon between hostname and port is turned into a dash to maximize filesystem compatibility)dirname
: The directory part of the remote file path, e.g. thesrc/projectname
insrc/projectname/utils.py
.basename
: The filename part of the remote file path, e.g. theutils.py
insrc/projectname/utils.py
path
:远程路径完整地址,例如:src/projectname/utils.py
。
While the SFTP protocol (which
get
uses) has no direct ability to download files from locations not owned by the connecting user, you may specifyuse_sudo=True
to work around this. When set, this setting allowsget
to copy (using sudo) the remote files to a temporary location on the remote end (defaults to remote user’s$HOME
; this may be overridden viatemp_dir
), and then download them tolocal_path
.注解
When
remote_path
is an absolute directory path, only the inner directories will be recreated locally and passed into the above variables. So for example,get('/var/log', '%(path)s')
would start writing out files likeapache2/access.log
,postgresql/8.4/postgresql.log
, etc, in the local working directory. It would not write out e.g.var/log/apache2/access.log
.Additionally, when downloading a single file,
%(dirname)s
and%(path)s
do not make as much sense and will be empty and equivalent to%(basename)s
, respectively. Thus a call likeget('/var/log/apache2/access.log', '%(path)s')
will save a local file namedaccess.log
, notvar/log/apache2/access.log
.这是为了与命令行程序
scp
保持一致。If left blank,
local_path
defaults to"%(host)s/%(path)s"
in order to be safe for multi-host invocations.警告
If your
local_path
argument does not contain%(host)s
and yourget
call runs against multiple hosts, your local files will be overwritten on each successive run!If
local_path
does not make use of the above variables (i.e. if it is a simple, explicit file path) it will act similar toscp
orcp
, overwriting pre-existing files if necessary, downloading into a directory if given (e.g.get('/path/to/remote_file.txt', 'local_directory')
will createlocal_directory/remote_file.txt
) and so forth.local_path
may alternately be a file-like object, such as the result ofopen('path', 'w')
or aStringIO
instance.注解
Attempting to
get
a directory into a file-like object is not valid and will result in an error.注解
This function will use
seek
andtell
to overwrite the entire contents of the file-like object, in order to be consistent with the behavior ofput
(which also considers the entire file). However, unlikeput
, the file pointer will not be restored to its previous location, as that doesn’t make as much sense here and/or may not even be possible.注解
If a file-like object such as StringIO has a
name
attribute, that will be used in Fabric’s printed output instead of the default<file obj>
在 1.0 版更改: Now honors the remote working directory as manipulated by
cd
, and the local working directory as manipulated bylcd
.在 1.0 版更改: Now allows file-like objects in the
local_path
argument.在 1.0 版更改:
local_path
may now contain interpolated path- and host-related variables.在 1.0 版更改: Directories may be specified in the
remote_path
argument and will trigger recursive downloads.在 1.0 版更改: Return value is now an iterable of downloaded local file paths, which also exhibits the
.failed
and.succeeded
attributes.在 1.5 版更改: Allow a
name
attribute on file-like objects for log output
-
fabric.operations.
local
(command, capture=False, shell=None)¶ Run a command on the local system.
local
is simply a convenience wrapper around the use of the builtin Pythonsubprocess
module withshell=True
activated. If you need to do anything special, consider using thesubprocess
module directly.shell
is passed directly to subprocess.Popen‘sexecute
argument (which determines the local shell to use.) As per the linked documentation, on Unix the default behavior is to use/bin/sh
, so this option is useful for setting that value to e.g./bin/bash
.local
is not currently capable of simultaneously printing and capturing output, asrun
/sudo
do. Thecapture
kwarg allows you to switch between printing and capturing as necessary, and defaults toFalse
.When
capture=False
, the local subprocess’ stdout and stderr streams are hooked up directly to your terminal, though you may use the global output controlsoutput.stdout
andoutput.stderr
to hide one or both if desired. In this mode, the return value’s stdout/stderr values are always empty.When
capture=True
, you will not see any output from the subprocess in your terminal, but the return value will contain the captured stdout/stderr.In either case, as with
run
andsudo
, this return value exhibits thereturn_code
,stderr
,failed
,succeeded
,command
andreal_command
attributes. Seerun
for details.local
will honor thelcd
context manager, allowing you to control its current working directory independently of the remote end (which honorscd
).在 1.0 版更改: Added the
succeeded
andstderr
attributes.在 1.0 版更改: Now honors the
lcd
context manager.在 1.0 版更改: Changed the default value of
capture
fromTrue
toFalse
.1.9 新版功能: The return value attributes
.command
and.real_command
.
-
fabric.operations.
open_shell
(*args, **kwargs)¶ Invoke a fully interactive shell on the remote end.
If
command
is given, it will be sent down the pipe before handing control over to the invoking user.This function is most useful for when you need to interact with a heavily shell-based command or series of commands, such as when debugging or when fully interactive recovery is required upon remote program failure.
It should be considered an easy way to work an interactive shell session into the middle of a Fabric script and is not a drop-in replacement for
run
, which is also capable of interacting with the remote end (albeit only while its given command is executing) and has much stronger programmatic abilities such as error handling and stdout/stderr capture.Specifically,
open_shell
provides a better interactive experience thanrun
, but use of a full remote shell prevents Fabric from determining whether programs run within the shell have failed, and pollutes the stdout/stderr stream with shell output such as login banners, prompts and echoed stdin.Thus, this function does not have a return value and will not trigger Fabric’s failure handling if any remote programs result in errors.
1.0 新版功能.
-
fabric.operations.
prompt
(text, key=None, default='', validate=None)¶ Prompt user with
text
and return the input (likeraw_input
).A single space character will be appended for convenience, but nothing else. Thus, you may want to end your prompt text with a question mark or a colon, e.g.
prompt("What hostname?")
.If
key
is given, the user’s input will be stored asenv.<key>
in addition to being returned byprompt
. If the key already existed inenv
, its value will be overwritten and a warning printed to the user.If
default
is given, it is displayed in square brackets and used if the user enters nothing (i.e. presses Enter without entering any text).default
defaults to the empty string. If non-empty, a space will be appended, so that a call such asprompt("What hostname?", default="foo")
would result in a prompt ofWhat hostname? [foo]
(with a trailing space after the[foo]
.)The optional keyword argument
validate
may be a callable or a string:- If a callable, it is called with the user’s input, and should return the value to be stored on success. On failure, it should raise an exception with an exception message, which will be printed to the user.
- If a string, the value passed to
validate
is used as a regular expression. It is thus recommended to use raw strings in this case. Note that the regular expression, if it is not fully matching (bounded by^
and$
) it will be made so. In other words, the input must fully match the regex.
Either way,
prompt
will re-prompt until validation passes (or the user hitsCtrl-C
).注解
prompt
honors env.abort_on_prompts and will callabort
instead of prompting if that flag is set toTrue
. If you want to block on user input regardless, try wrapping withsettings
.Examples:
# Simplest form:
environment = prompt('Please specify target environment: ')

# With default, and storing as env.dish:
prompt('Specify favorite dish: ', 'dish', default='spam & eggs')

# With validation, i.e. requiring integer input:
prompt('Please specify process nice level: ', key='nice', validate=int)

# With validation against a regular expression:
release = prompt('Please supply a release name', validate=r'^\w+-\d+(\.\d+)?$')

# Prompt regardless of the global abort-on-prompts setting:
with settings(abort_on_prompts=False):
    prompt('I seriously need an answer on this! ')
-
fabric.operations.
put
(*args, **kwargs)¶ Upload one or more files to a remote host.
put
returns an iterable containing the absolute file paths of all remote files uploaded. This iterable also exhibits a.failed
attribute containing any local file paths which failed to upload (and may thus be used as a boolean test.) You may also check.succeeded
which is equivalent tonot .failed
.local_path
may be a relative or absolute local file or directory path, and may contain shell-style wildcards, as understood by the Pythonglob
module (giveuse_glob=False
to disable this behavior). Tilde expansion (as implemented byos.path.expanduser
) is also performed.local_path
may alternately be a file-like object, such as the result ofopen('path')
or aStringIO
instance.注解
In this case,
put
will attempt to read the entire contents of the file-like object by rewinding it usingseek
(and will usetell
afterwards to preserve the previous file position).remote_path
may also be a relative or absolute location, but applied to the remote host. Relative paths are relative to the remote user’s home directory, but tilde expansion (e.g.~/.ssh/
) will also be performed if necessary.An empty string, in either path argument, will be replaced by the appropriate end’s current working directory.
While the SFTP protocol (which
put
uses) has no direct ability to upload files to locations not owned by the connecting user, you may specifyuse_sudo=True
to work around this. When set, this setting causesput
to upload the local files to a temporary location on the remote end (defaults to remote user’s$HOME
; this may be overridden viatemp_dir
), and then usesudo
to move them toremote_path
.In some use cases, it is desirable to force a newly uploaded file to match the mode of its local counterpart (such as when uploading executable scripts). To do this, specify
mirror_local_mode=True
.Alternately, you may use the
mode
kwarg to specify an exact mode, in the same vein asos.chmod
or the Unixchmod
command.put
will honorcd
, so relative values inremote_path
will be prepended by the current remote working directory, if applicable. Thus, for example, the below snippet would attempt to upload to/tmp/files/test.txt
instead of~/files/test.txt
:
with cd('/tmp'):
    put('/path/to/local/test.txt', 'files')
Use of
lcd
will affectlocal_path
in the same manner.Examples:
put('bin/project.zip', '/tmp/project.zip')
put('*.py', 'cgi-bin/')
put('index.html', 'index.html', mode=0755)
Note

If a file-like object such as StringIO has a name attribute, that will be used in Fabric's printed output instead of the default <file obj>.
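To tie the above together, here is a hedged sketch (the paths, file contents and task name are invented for illustration) showing a file-like upload plus the return value's .failed attribute:

from StringIO import StringIO
from fabric.api import put

def push_motd():
    # Upload an in-memory file; the 'name' attribute only affects log output.
    motd = StringIO("Welcome to the staging box!\n")
    motd.name = 'motd'
    result = put(motd, '/etc/motd', use_sudo=True)
    if result.failed:
        print("Failed to upload: %s" % result.failed)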
Changed in version 1.0: Now honors the remote working directory as manipulated by cd, and the local working directory as manipulated by lcd.

Changed in version 1.0: Now allows file-like objects in the local_path argument.

Changed in version 1.0: Directories may be specified in the local_path argument and will trigger recursive uploads.

Changed in version 1.0: Return value is now an iterable of uploaded remote file paths which also exhibits the .failed and .succeeded attributes.

Changed in version 1.5: Allow a name attribute on file-like objects for log output.

Changed in version 1.7: Added the use_glob option to allow disabling of globbing.
- fabric.operations.reboot(*args, **kwargs)¶

Reboot the remote system.
Will temporarily tweak Fabric's reconnection settings (timeout and connection_attempts) to ensure that reconnection does not give up for at least wait seconds.

Note
As of Fabric 1.4, the ability to reconnect partway through a session no longer requires use of internal APIs. While we are not officially deprecating this function, adding more features to it will not be a priority.
Users who want greater control are encouraged to check out this function’s (6 lines long, well commented) source code and write their own adaptation using different timeout/attempt values or additional logic.
New in version 0.9.2.
Changed in version 1.4: Changed the wait kwarg to be optional, and refactored to leverage the new reconnection functionality; it may not actually have to wait for wait seconds before reconnecting.
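A hedged usage sketch (the 120-second window and the surrounding task are illustrative only):

from fabric.api import reboot, run

def kernel_upgrade_reboot():
    # Give the host up to 120 seconds to come back before giving up.
    reboot(wait=120)
    run('uptime')  # verify the box is reachable again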
- fabric.operations.require(*keys, **kwargs)¶

Check for given keys in the shared environment dict and abort if not found.
Positional arguments should be strings signifying what env vars should be checked for. If any of the given arguments do not exist, Fabric will abort execution and print the names of the missing keys.
The optional keyword argument used_for may be a string, which will be printed in the error output to inform users why this requirement is in place. used_for is printed as part of a string similar to:

"Th(is|ese) variable(s) (are|is) used for %s"

so format it appropriately.
The optional keyword argument provided_by may be a list of functions or function names or a single function or function name which the user should be able to execute in order to set the key or keys; it will be included in the error output if requirements are not met.

Note: it is assumed that the keyword arguments apply to all given keys as a group. If you feel the need to specify more than one used_for, for example, you should break your logic into multiple calls to require().

Changed in version 1.1: Allow iterable provided_by values instead of just a single value.
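A minimal sketch of the pattern described above (the task and key names are invented):

from fabric.api import env, require

def staging():
    env.deploy_target = '/srv/staging'

def deploy():
    # Aborts with a helpful message unless env.deploy_target was set,
    # e.g. by running: fab staging deploy
    require('deploy_target', used_for='choosing a deployment directory',
            provided_by=[staging])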
- fabric.operations.run(*args, **kwargs)¶

Run a shell command on a remote host.
If shell is True (the default), run will execute the given command string via a shell interpreter, the value of which may be controlled by setting env.shell (defaulting to something similar to /bin/bash -l -c "<command>".) Any double-quote (") or dollar-sign ($) characters in command will be automatically escaped when shell is True.

run will return the result of the remote program's stdout as a single (likely multiline) string. This string will exhibit failed and succeeded boolean attributes specifying whether the command failed or succeeded, and will also include the return code as the return_code attribute. Furthermore, it includes a copy of the requested & actual command strings executed, as .command and .real_command, respectively.

Any text entered in your local terminal will be forwarded to the remote program as it runs, thus allowing you to interact with password or other prompts naturally. For more on how this works, see Interaction with remote programs.
You may pass pty=False to forego creation of a pseudo-terminal on the remote end in case the presence of one causes problems for the command in question. However, this will force Fabric itself to echo any and all input you type while the command is running, including sensitive passwords. (With pty=True, the remote pseudo-terminal will echo for you, and will intelligently handle password-style prompts.) See Pseudo-terminals for details.

Similarly, if you need to programmatically examine the stderr stream of the remote program (exhibited as the stderr attribute on this function's return value), you may set combine_stderr=False. Doing so has a high chance of causing garbled output to appear on your terminal (though the resulting strings returned by run will be properly separated). For more info, please read Combining stdout and stderr.

To ignore non-zero return codes, specify warn_only=True. To both ignore non-zero return codes and force a command to run silently, specify quiet=True.

To override which local streams are used to display remote stdout and/or stderr, specify stdout or stderr. (By default, the regular sys.stdout and sys.stderr Python stream objects are used.)

For example, run("command", stderr=sys.stdout) would print the remote standard error to the local standard out, while preserving it as its own distinct attribute on the return value (as per above.) Alternately, you could even provide your own stream objects or loggers, e.g. myout = StringIO(); run("command", stdout=myout).

If you want an exception raised when the remote program takes too long to run, specify timeout=N where N is an integer number of seconds, after which to time out. This will cause run to raise a CommandTimeout exception.

If you want to disable Fabric's automatic attempts at escaping quotes, dollar signs etc., specify shell_escape=False.

Examples:
run("ls /var/www/") run("ls /home/myuser", shell=False) output = run('ls /var/www/site1') run("take_a_long_time", timeout=5)
New in version 1.0: The succeeded and stderr return value attributes, the combine_stderr kwarg, and interactive behavior.

Changed in version 1.0: The default value of pty is now True.

Changed in version 1.0.2: The default value of combine_stderr is now None instead of True. However, the default behavior is unchanged, as the global setting is still True.

New in version 1.5: The quiet, warn_only, stdout and stderr keyword arguments.

New in version 1.5: The return value attributes .command and .real_command.

New in version 1.6: The timeout argument.

New in version 1.7: The shell_escape argument.
- fabric.operations.sudo(*args, **kwargs)¶

Run a shell command on a remote host, with superuser privileges.
sudo is identical in every way to run, except that it will always wrap the given command in a call to the sudo program to provide superuser privileges.

sudo accepts additional user and group arguments, which are passed to sudo and allow you to run as some user and/or group other than root. On most systems, the sudo program can take a string username/group or an integer userid/groupid (uid/gid); user and group may likewise be strings or integers.

You may set env.sudo_user at module level or via settings if you want multiple sudo calls to have the same user value. An explicit user argument will, of course, override this global setting.

Examples:
sudo("~/install_script.py") sudo("mkdir /var/www/new_docroot", user="www-data") sudo("ls /home/jdoe", user=1001) result = sudo("ls /tmp/") with settings(sudo_user='mysql'): sudo("whoami") # prints 'mysql'
Changed in version 1.0: See the changes to run.

Changed in version 1.5: Now honors env.sudo_user.

New in version 1.5: The quiet, warn_only, stdout and stderr keyword arguments.

New in version 1.5: The return value attributes .command and .real_command.

New in version 1.7: The shell_escape argument.
Tasks¶
- class fabric.tasks.Task(alias=None, aliases=None, default=False, name=None, *args, **kwargs)¶

Abstract base class for objects wishing to be picked up as Fabric tasks.
When the fab tool runs, instances of subclasses found in a fabfile are treated as valid tasks.
For details on how to implement and use Task subclasses, please see the documentation on new-style tasks.

New in version 1.1.
- __weakref__¶

List of weak references to the object (if defined).
- get_hosts_and_effective_roles(arg_hosts, arg_roles, arg_exclude_hosts, env=None)¶

Return a tuple containing the host list the given task should be using, and the roles being used.
See How host lists are constructed for detailed documentation on how host lists are set.
Changed in version 1.9.
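As a hedged illustration of the new-style task API described above (the class, task name and remote command are invented):

from fabric.api import run
from fabric.tasks import Task

class Deploy(Task):
    """Deploy the application on the current host."""
    name = 'deploy'  # the name 'fab' will expose this task under

    def run(self, version='latest'):
        run('deploy.sh %s' % version)  # hypothetical remote script

# A module-level instance is what the 'fab' tool picks up.
deploy = Deploy()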
- class fabric.tasks.WrappedCallableTask(callable, *args, **kwargs)¶

Wraps a given callable transparently, while marking it as a valid Task.
Generally used via task and not directly.

New in version 1.1.
See also
- fabric.tasks.execute(task, *args, **kwargs)¶

Execute task (callable or name), honoring host/role decorators, etc.

task may be an actual callable object, or it may be a registered task name, which is used to look up a callable just as if the name had been given on the command line (including namespaced tasks, e.g. "deploy.migrate").

The task will then be executed once per host in its host list, which is (again) assembled in the same manner as CLI-specified tasks: drawing from -H, env.hosts, the hosts or roles decorators, and so forth.

The host, hosts, role, roles and exclude_hosts keyword arguments will be stripped out of the final call, and used to set the task's host list, as if they had been specified on the command line like e.g. fab taskname:host=hostname.

Any other arguments or keyword arguments will be passed verbatim into task (the function itself, not the @task decorator wrapping your function) when it is executed, so execute(mytask, 'arg1', kwarg1='value') will (once per host) invoke mytask('arg1', kwarg1='value').

Returns: a dictionary mapping host strings to the given task's return value for that host's execution run. For example, execute(foo, hosts=['a', 'b']) would result in {'a': None, 'b': 'bar'} if host a returned nothing and host b returned 'bar'. In situations where a task execution fails for a given host but overall progress does not abort (such as when env.skip_bad_hosts is True), the return value for that host will be an error object or message.

See also

The execute usage docs, for an expanded explanation and some examples.
New in version 1.3.

Changed in version 1.4: Added the return value mapping; previously there was no return value at all.
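A short sketch of execute collecting per-host results (the host names are placeholders):

from fabric.api import execute, run

def uptime():
    return run('uptime')

def report():
    # Runs 'uptime' once per host and returns {'web1': ..., 'web2': ...}.
    results = execute(uptime, hosts=['web1', 'web2'])
    for host, output in sorted(results.items()):
        print('%s -> %s' % (host, output))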
- fabric.tasks.requires_parallel(task)¶

Returns True if the given task should be run in parallel mode.

Specifically:

- it has been explicitly marked with the @parallel decorator, or:
- the global parallel option (env.parallel) is True and the task has not been explicitly marked with the @serial decorator.
Utilities¶
Internal convenience utilities, e.g. aborting execution with an error message, or indenting multiline output.
- fabric.utils.abort(msg)¶

Abort execution, print msg to stderr and exit with error status (1).

This function currently makes use of SystemExit in a manner that is similar to sys.exit (but which skips the automatic printing to stderr, allowing us to more tightly control it via settings).
Therefore, it's possible to detect and recover from inner calls to abort by using except SystemExit or similar.
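A hedged sketch of that recovery pattern (the task body is invented):

from fabric.api import abort

def tolerant_task():
    try:
        abort('Simulated fatal error')
    except SystemExit:
        # Recover here instead of letting the whole fab run die.
        print('abort() was intercepted; continuing')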
- fabric.utils.error(message, func=None, exception=None, stdout=None, stderr=None)¶

Call func with the given error message.

If func is None, the value of env.warn_only determines whether to call abort or warn.

If the exception argument (which should be a string) is given, it will be printed alongside the user-provided message.

If stdout and/or stderr are given, they will be used as the output streams to print to.
- fabric.utils.fastprint(text, show_prefix=False, end='', flush=True)¶

Print text immediately, without any prefix or line ending.

This function is simply an alias of puts with different default argument values, such that text is printed without any embellishment and immediately flushed.

It is useful when you wish to print output that might otherwise get buffered by Python's output buffering (such as within a processor-intensive for loop). Since such use cases typically also require a lack of line endings (such as printing a series of dots to signify progress) it also omits the traditional newline by default.

Note

Since fastprint calls puts, its output level is likewise that of user.

New in version 0.9.2.
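A quick hedged sketch of the progress-dots use case mentioned above:

import time
from fabric.utils import fastprint

def crunch():
    for i in range(10):
        time.sleep(0.1)   # stand-in for real, CPU-heavy work
        fastprint('.')    # appears immediately, with no newline
    fastprint('done\n')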
See also
- fabric.utils.indent(text, spaces=4, strip=False)¶

Return text indented by the given number of spaces.

If text is not a string, it is assumed to be a list of lines and will be joined by \n prior to indenting.

When strip is True, a minimum amount of whitespace is removed from the left-hand side of the given string (so that relative indents are preserved, but otherwise things are left-stripped). This allows you to effectively "normalize" any previous indentation for some inputs.
- fabric.utils.puts(text, show_prefix=None, end='\n', flush=False)¶

An alias for print whose output is managed by Fabric's output controls.

In other words, this function simply directs output to sys.stdout, and will hide that output if the user output level is set to False.

If show_prefix=False, puts will omit the leading [hostname] it would normally prepend. (This is also omitted if env.host_string is empty.)

Setting end to the empty string '' will suppress the trailing newline.

You may force output to be flushed immediately (e.g. to bypass output buffering) by setting flush=True.

New in version 0.9.2.
See also
- fabric.utils.warn(msg)¶

Print a warning message, but do not abort execution.
This function honors Fabric's output controls and will print the given msg to stderr, provided the warnings output level (which is active by default) is turned on.
Contrib API¶
Fabric's contrib package contains commonly useful tools (often merged in from users' fabfiles) for tasks such as user I/O and modifying remote files. While the core API is intended to stay small and stable, the contrib package will grow and evolve (while trying to remain backwards-compatible) as more use cases are solved and added.
Console Output Utilities¶

Console/terminal user-interface functionality.
- fabric.contrib.console.confirm(question, default=True)¶

Ask the user a yes/no question and return their response as True or False.
question should be a simple, grammatically complete question such as "Do you wish to continue?", and will have a string similar to "[Y/n]" appended automatically; the function will not append a question mark for you. By default, when the user presses Enter without typing anything, "yes" is assumed. This can be changed by specifying default=False.
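A hedged sketch of confirm guarding a destructive step (the task itself is invented):

from fabric.api import abort
from fabric.contrib.console import confirm

def reset_database():
    if not confirm('Really wipe the remote database?', default=False):
        abort('Aborted at user request.')
    # ... proceed with the destructive work ...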
Django Integration¶

New in version 0.9.2.

These functions streamline the process of initializing Django's settings-module environment variable. Once this is done, your fabfile may import from your Django project, or from Django itself, without requiring the use of manage.py plugins or having to set the environment variable yourself every time you use your fabfile.
Currently, these functions only allow Fabric to interact with the Django installation local to your fabfile. This is not as limiting as it sounds; for example, you can use Fabric as a remote "build" tool as well as using it locally, as in the following example:
from fabric.api import run, local, hosts, cd
from fabric.contrib import django

django.project('myproject')
from myproject.myapp.models import MyModel

def print_instances():
    for instance in MyModel.objects.all():
        print(instance)

@hosts('production-server')
def print_production_instances():
    with cd('/path/to/myproject'):
        run('fab print_instances')
With Fabric installed on both ends, you could execute print_production_instances locally, which would trigger print_instances on the production server, which would in turn talk to the production Django database.
As another example, if your local and remote settings are similar, you can use the fabfile to obtain e.g. your database settings, and then use them when executing a remote (non-Fabric) command. This allows you some degree of flexibility even when Fabric is only installed locally:
from fabric.api import run
from fabric.contrib import django

django.settings_module('myproject.settings')
from django.conf import settings

def dump_production_database():
    run('mysqldump -u %s -p=%s %s > /tmp/prod-db.sql' % (
        settings.DATABASE_USER,
        settings.DATABASE_PASSWORD,
        settings.DATABASE_NAME
    ))
The above snippet will work when run from a local development environment, provided your local settings.py mirrors the remote one's database connection info.
- fabric.contrib.django.project(name)¶

Set DJANGO_SETTINGS_MODULE to '<name>.settings'.

This function provides a handy shortcut for the common case where one is using the Django default naming convention for their settings file and location.

Uses settings_module; see its documentation for details on usage and behavior.
- fabric.contrib.django.settings_module(module)¶

Set the DJANGO_SETTINGS_MODULE shell environment variable to module.

Due to how Django works, imports from Django or a Django project will fail unless the shell environment variable DJANGO_SETTINGS_MODULE is correctly set (see the Django settings docs). This function provides a shortcut for doing so: call it near the top of your fabfile or Fabric-using code, and subsequent imports from Django will work correctly.

Note

This function sets a shell environment variable (via os.environ) and has no effect on Fabric's own internal env variables.
File and Directory Management¶

Module providing an easy API for working with remote files and folders.
- fabric.contrib.files.append(filename, text, use_sudo=False, partial=False, escape=True, shell=False)¶

Append string (or list of strings) text to filename.

When a list is given, each string inside is handled independently (but in the order given).
If text is already found in filename, the append is not run, and None is returned immediately. Otherwise, the given text is appended to the end of filename via e.g. echo '$text' >> $filename.

The test for whether text already exists defaults to a full-line match, e.g. ^<text>$, as this seems to be the most sensible approach for the "append lines to a file" use case. You may override this and force partial searching (e.g. ^<text>) by specifying partial=True.

Because text is single-quoted, single quotes within it will be transparently backslash-escaped. This can be disabled with escape=False.

If use_sudo is True, sudo will be used instead of run.

The shell argument will be eventually passed to run/sudo; see the description of the same argument in ~fabric.contrib.sed for details.

Changed in version 0.9.1: Added the partial keyword argument.

Changed in version 1.0: Swapped the order of the filename and text arguments to be consistent with other functions in this module.

Changed in version 1.0: Changed the default value of partial to False.

Changed in version 1.4: Updated the regular-expression-related escaping to fix a number of corner cases.

New in version 1.6: Added the shell keyword argument.
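A hedged sketch of append in practice (the file path and lines are illustrative):

from fabric.contrib.files import append

def add_host_aliases():
    # Each line is appended only if it is not already present in the file.
    append('/etc/hosts',
           ['10.0.0.5 db-master', '10.0.0.6 db-replica'],
           use_sudo=True)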
- fabric.contrib.files.comment(filename, regex, use_sudo=False, char='#', backup='.bak', shell=False)¶

Comment out all lines in filename matching regex.

The default commenting character is # and may be overridden by the char argument.

This function uses the sed function, and will accept the same use_sudo, shell and backup keyword arguments that sed does.

comment will prepend the comment character to the beginning of the line, so that results look like this:

this line is uncommented
#this line is commented
#   this line is indented and commented

In other words, comments do not "follow" the indentation of the commented line, and no whitespace is inserted after the comment character unless you specify it yourself, e.g. char='# '.

Note

In order to preserve the line being commented out, this function will wrap your regex argument in parentheses, so you don't need to. It will also ensure that any leading ^ or trailing $ are kept outside the parentheses. For example, calling comment(filename, r'^foo$') will result in a sed call with a "before" regex of r'^(foo)$' (and an "after" regex of r'#\1').

New in version 1.5: Added the shell keyword argument.
- fabric.contrib.files.contains(filename, text, exact=False, use_sudo=False, escape=True, shell=False)¶

Return True if filename contains text (which may be a regex).

By default, this function will consider a partial-line match, i.e. when text only makes up part of a line in filename. Specify exact=True to change this behavior so that only a full-line match returns True.

This function leverages egrep on the remote end (so it may not speak Python regular expression syntax), and skips the env.shell wrapper by default.

If use_sudo is True, sudo will be used instead of run.

If escape is False, no extra regular-expression-related escaping is performed (this also overrides exact, so that no ^/$ is added).

The shell argument will be eventually passed to run/sudo; see the description of the same argument in ~fabric.contrib.sed for details.

Changed in version 1.0: Swapped the order of the filename and text arguments to be consistent with other functions in this module.

Changed in version 1.4: Updated the regular-expression-related escaping to fix a number of corner cases.

Changed in version 1.4: Added the escape keyword argument.

New in version 1.6: Added the shell keyword argument.
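A hedged sketch combining contains with append (the repository line is invented):

from fabric.contrib.files import append, contains

def ensure_repo_line():
    # Only touch sources.list when the entry is genuinely missing.
    if not contains('/etc/apt/sources.list', 'example-repo'):
        append('/etc/apt/sources.list',
               'deb http://repo.example.com/apt stable main',
               use_sudo=True)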
- fabric.contrib.files.exists(path, use_sudo=False, verbose=False)¶

Return True if the given path exists on the current remote host.

If use_sudo is True, sudo will be used instead of run.

exists will, by default, hide all output (including the run line, stdout, stderr and any warning resulting from the file not existing) in order to avoid cluttered output. You may specify verbose=True to change this behavior.
- fabric.contrib.files.first(*args, **kwargs)¶

Given one or more file paths, return the first one found, or None if none exist. The use_sudo and verbose arguments are passed through to exists.
- fabric.contrib.files.is_link(path, use_sudo=False, verbose=False)¶

Return True if the given path on the current remote host is a symlink.

If use_sudo is True, sudo will be used instead of run.

is_link will, by default, hide all output. Specify verbose=True to change this.
- fabric.contrib.files.sed(filename, before, after, limit='', use_sudo=False, backup='.bak', flags='', shell=False)¶

Run a search-and-replace on filename with the given regex patterns.

Equivalent to sed -i<backup> -r -e "/<limit>/ s/<before>/<after>/<flags>g" <filename>. Setting backup to an empty string will disable backup file creation.

For convenience, before and after will automatically escape forward slashes, single quotes and parentheses for you, so you do not need to write e.g. http:\/\/foo\.com where http://foo\.com would do.

If use_sudo is True, sudo will be used instead of run.

The shell argument will be eventually passed to run/sudo. It defaults to False in order to avoid problems with many nested levels of quotes and backslashes. However, setting it to True may help when using ~fabric.operations.cd to wrap explicit or implicit sudo calls. (cd is by nature a shell built-in, not a standalone command, so it needs to be invoked within a shell.)

Other options may be specified with sed-compatible regex flags -- for example, to make the search and replace case-insensitive, specify flags="i". The g flag is always specified regardless, so you do not need to remember to include it when overriding this parameter.

New in version 1.1: The flags parameter.

New in version 1.6: Added the shell keyword argument.
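A hedged sketch of a typical sed call (the config file and values are illustrative):

from fabric.contrib.files import sed

def harden_ssh():
    # Rewrites the file in place; a .bak backup is kept by default.
    sed('/etc/ssh/sshd_config',
        'PermitRootLogin yes',
        'PermitRootLogin no',
        use_sudo=True)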
- fabric.contrib.files.uncomment(filename, regex, use_sudo=False, char='#', backup='.bak', shell=False)¶

Uncomment all lines in filename matching regex.

The default comment delimiter is # and may be overridden by the char argument.

This function uses the sed function, and will accept the same use_sudo, shell and backup keyword arguments that sed does.

uncomment will remove a single whitespace character following the comment character, if it exists, but will preserve all preceding whitespace. For example, "# foo" becomes "foo" (the single space is stripped), while "    # foo" becomes "    foo" (the space after the comment character is stripped, but the preceding four spaces are not).

Changed in version 1.6: Added the shell keyword argument.
- fabric.contrib.files.upload_template(filename, destination, context=None, use_jinja=False, template_dir=None, use_sudo=False, backup=True, mirror_local_mode=False, mode=None, pty=None)¶

Render and upload a template text file to a remote host.

Returns the result of the inner call to put; see its documentation for details.

filename should be the path to a text file, which may contain Python string-interpolation formatting, and will be rendered with the given context dictionary (if given).

If use_jinja is set to True and the Jinja2 templating library is installed, Jinja will be used to render the template instead. Templates are loaded from the invoking user's current working directory by default, or from template_dir if given.

The rendered file will be uploaded to the remote path destination. If a file of the same name already exists on the remote end, it will be renamed with a .bak extension unless backup=False is specified.

By default, the file will be copied to destination as the logged-in user; specify use_sudo=True to force the copy to use sudo.

The mirror_local_mode and mode kwargs are passed directly to the internal put call; see its documentation for details on those two options.

The pty kwarg will be passed verbatim to all internal run/sudo calls, such as those used for testing paths, making backups, etc.

Changed in version 1.1: Added the backup, mirror_local_mode and mode kwargs.

Changed in version 1.9: Added the pty kwarg.
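A hedged sketch of upload_template with Jinja2 (the template, context keys and paths are invented):

from fabric.contrib.files import upload_template

def push_nginx_config():
    # Render templates/nginx.conf locally, then install it via sudo.
    upload_template('nginx.conf', '/etc/nginx/nginx.conf',
                    context={'server_name': 'example.com', 'workers': 4},
                    use_jinja=True, template_dir='templates',
                    use_sudo=True, backup=True)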
Project Tools¶

Useful non-core functionality, e.g. functions composing multiple operations.
- fabric.contrib.project.rsync_project(*args, **kwargs)¶

Synchronize a remote directory with the current project directory via rsync.

Where upload_project() makes use of scp to copy your entire project every time it is invoked, rsync_project() uses the rsync command-line utility, which only transfers files that are newer on the local end than on the remote end.

rsync_project() is a thin wrapper around rsync; for details on how rsync works, please read its own manual. To make this function work correctly, rsync must be installed on both the local and remote systems.

This function makes use of Fabric's local() operation and returns that call's output; that is, it will return the stdout, if any, of the resulting rsync invocation.

rsync_project() takes the following parameters:

- remote_dir: the only required parameter, this is the path to the directory on the remote server to sync with. Due to how rsync is implemented, its exact behavior depends on the value of local_dir (see the sub-items below):
  - If local_dir ends with a trailing slash, the files will be dropped inside of remote_dir. E.g. rsync_project("/home/username/project/", "foldername/") will drop the contents of foldername inside of /home/username/project.
  - If local_dir does not end with a trailing slash (and this includes the default scenario, when local_dir is not specified), remote_dir is effectively the "parent" directory, and a new directory named after local_dir will be created inside of it. So rsync_project("/home/username", "foldername") would create a new directory /home/username/foldername and place the files there.
- local_dir: by default, rsync_project uses your current working directory as the source directory. This may be overridden by specifying local_dir, a string which is passed verbatim to rsync, and may thus be a single directory ("my_directory") or multiple directories ("dir1 dir2"). See the rsync documentation for details.
- exclude: optional; may be a single string or an iterable of strings, and is used to pass one or more --exclude options to rsync.
- delete: a boolean controlling whether rsync's --delete option is used. If True, files that no longer exist locally will be removed on the remote end. Defaults to False.
- extra_opts: an optional arbitrary string which you may use to pass custom options straight through to rsync.
- ssh_opts: like extra_opts, but specifically for the SSH options string (rsync's --rsh flag).
- capture: sent directly into the inner local call.
- upload: a boolean controlling whether file synchronization is performed up- or downstream. Upstream by default.
- default_opts: the default rsync options are -pthrvz; override this if you wish to use different ones (e.g. to remove the verbosity option, etc.).

Furthermore, this function transparently honors Fabric's port and SSH key settings: when the current host string contains a nonstandard port, or when env.key_filename is non-empty, the specified port and/or SSH key filename(s) will be used.

For reference, the approximate rsync command line constructed by this function is the following:

rsync [--delete] [--exclude exclude[0][, --exclude[1][, ...]]] \
    [default_opts] [extra_opts] <local_dir> <host_string>:<remote_dir>

New in version 1.4.0: The ssh_opts keyword argument.

New in version 1.4.1: The capture keyword argument.

New in version 1.8.0: The default_opts keyword argument.
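A hedged sketch of a typical call (the paths and exclude patterns are illustrative):

from fabric.contrib.project import rsync_project

def sync_site():
    # Trailing slash on local_dir: drop the *contents* into remote_dir.
    rsync_project(remote_dir='/var/www/mysite/',
                  local_dir='./',
                  exclude=['.git', '*.pyc'],
                  delete=True)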
- fabric.contrib.project.upload_project(local_dir=None, remote_dir='', use_sudo=False)¶

Upload the current project to a remote system via tar/gzip.

local_dir specifies the local project directory to upload, and defaults to the current working directory.

remote_dir specifies the target directory to upload into (meaning that a copy of local_dir will appear as a subdirectory of remote_dir); it defaults to the remote user's home directory.

use_sudo specifies which method should be used when executing commands remotely: sudo will be used if use_sudo is True, otherwise run will be used.

This function makes use of the tar and gzip programs/libraries, and thus will not work too well on Win32 systems unless one is using Cygwin or something similar. It will attempt to clean up the remote tarfile regardless of whether it completes successfully.

Changed in version 1.1: Added the local_dir and remote_dir kwargs.

Changed in version 1.7: Added the use_sudo kwarg.
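A hedged sketch (the directory names are invented):

from fabric.contrib.project import upload_project

def ship_snapshot():
    # Tars up ./myproject locally and unpacks it under /opt remotely.
    upload_project(local_dir='myproject', remote_dir='/opt', use_sudo=True)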
参与 & 测试¶
我们欢迎高级用户 & 开发者提交并帮助修复 bug,或者帮助开发新功能。
Running Fabric's Tests¶

Fabric maintains a 100% passing test suite, and submitted patches should include matching tests wherever possible, so that they are easier to verify & merge.

When developing Fabric, it is best to set up a standalone virtualenv in which to install the dependencies and run the tests.
First-time Setup¶

Fork the repository on GitHub

Clone your fork locally (e.g. git clone git@github.com:<your_username>/fabric.git)

cd fabric
virtualenv env
. env/bin/activate
pip install -r requirements.txt
python setup.py develop
Running the Tests¶

With the virtualenv activated (. env/bin/activate) and the dependencies installed, run the tests like so:
nosetests tests/
You should run the tests against master (or the release branch you are working on) to make sure your changes/tests pass.
If you have already run python setup.py develop in the Fabric repository, you may also run:
fab test
This additionally runs the doctests and provides colorized output.