I have come up with a minimalist approach to managing servers and applications running on them.
Here is how I built it.
It starts with a single Python script in a directory. I call it make.py and it works like this:
#!/usr/bin/env python

import sys

def main():
    command = sys.argv[1:]
    if command == ["configure"]:
        print("TODO: configure the server")
    else:
        sys.exit(f"Unknown command {command}")

if __name__ == "__main__":
    main()
It can be used to run different commands:
$ ./make.py
Unknown command []
$ ./make.py run
Unknown command ['run']
$ ./make.py configure
TODO: configure the server
The configure command is intended to configure the server. If we are doing this at scale, we might want to use something like Puppet or Ansible. But we are not doing this at scale. We are taking a minimalist approach. We are just one person, managing one (or perhaps a few) servers, and we don't want the complexity of those tools. But we do want to automate.
Let's say we want to configure the hostname of our server. Assuming that we have SSH access to the server and that our user has sudo privileges, we can do it by logging into the server and running sudo hostnamectl set-hostname server.example.com.
Let's see how we can evolve our make.py script towards being able to do that. Let's first look at the code and then go over how it works:
#!/usr/bin/env python

USER = "bob"
HOSTNAME = "server.example.com"

import subprocess
import sys

def main():
    command = sys.argv[1:]
    if command == ["configure"]:
        configure()
    elif command == ["configure_server"]:
        configure_server()
    else:
        sys.exit(f"Unknown command {command}")

def configure():
    self = read_file(__file__)
    command([
        "ssh",
        f"{USER}@{HOSTNAME}",
        "python",
        "-",
        "configure_server",
    ], stdin=self)

def configure_server():
    command(["ls", "-la", "."])

def command(command, stdin=None):
    subprocess.run(command, input=stdin, text=True)

def read_file(path):
    with open(path) as f:
        return f.read()

if __name__ == "__main__":
    main()
The first thing to notice is that we provide all server details in the script itself. Everything will live in this script. This is a minimalist approach. We don't want to deal with the complexity of configuration files.
Next, we have modified the configure command so that, instead of printing a TODO, it runs an SSH command. This command is executed on our client and looks like this:
ssh bob@server.example.com python - configure_server
This will run python on the server. The - argument to Python tells it to read the script from stdin. When we run the SSH command we read the current file (the make.py script) and pass it to stdin. So we effectively copy the whole make.py script to the server (without ever writing it to disk) and start executing it. The command that we execute is configure_server. So the configure command is intended to run on the client and the configure_server command is intended to run on the server.
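The `-` behavior can be checked in isolation, without a server. This sketch pipes a tiny script into a fresh Python process (I use `sys.executable` here instead of a bare `python`, purely as a portability assumption):

```python
import subprocess
import sys

# A tiny stand-in script that reports the arguments it was started with.
script = "import sys\nprint(sys.argv[1:])\n"

# "-" makes Python read the program from stdin; the remaining
# arguments are passed through to the script as sys.argv[1:].
result = subprocess.run(
    [sys.executable, "-", "configure_server"],
    input=script,
    capture_output=True,
    text=True,
)
print(result.stdout.strip())  # ['configure_server']
```

This is exactly what lets the copied make.py dispatch on `configure_server` on the server side.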
When we run the configure command on the client, it looks like this:
$ ./make.py configure
total 48
drwx------. 6 bob  bob  4096 May  5 20:58 .
drwxr-xr-x. 3 root root 4096 May  3 12:18 ..
-rw-------. 1 bob  bob     8 May  4 08:30 .bash_history
It is the Python process on the server that runs the ls command, which in turn produces this output. This will only work if Python is installed on the server. That is a requirement of this minimalist approach.
Now we have a way to run commands on the server as a user. But we need to run commands as root (via sudo) to be able to configure the server. How do we do that? Let's evolve our script a little.
Again, let's first look at the code and then go over how it works:
#!/usr/bin/env python

USER = "bob"
USER_PASSWORD = "password"
HOSTNAME = "server.example.com"

import os
import subprocess
import sys
import tempfile

def main():
    command = sys.argv[1:]
    if command == ["configure"]:
        configure()
    elif command == ["configure_server"]:
        configure_server()
    elif command == ["configure_server_root"]:
        configure_server_root()
    else:
        sys.exit(f"Unknown command {command}")

def configure():
    self = read_file(__file__)
    self_variable = f"SELF = {repr(self)}\n"
    command([
        "ssh",
        f"{USER}@{HOSTNAME}",
        "python",
        "-",
        "configure_server",
    ], stdin=self_variable+self)

def configure_server():
    with tempfile.TemporaryDirectory(dir=".") as d:
        password_path = f"{d}/password"
        write_file(password_path, USER_PASSWORD)
        command(["chmod", "600", password_path])
        password_cater_path = f"{d}/cat"
        write_file(password_cater_path, f"#!/usr/bin/env sh\ncat {password_path}")
        command(["chmod", "700", password_cater_path])
        os.environ["SUDO_ASKPASS"] = password_cater_path
        command([
            "sudo",
            "--askpass",
            "python",
            "-",
            "configure_server_root",
        ], stdin=SELF)

def configure_server_root():
    command(["hostnamectl", "set-hostname", HOSTNAME])

def command(command, stdin=None):
    print(command)
    sys.stdout.flush()
    subprocess.run(command, input=stdin, text=True)

def read_file(path):
    with open(path) as f:
        return f.read()

def write_file(path, content):
    with open(path, "w") as f:
        f.write(content)

if __name__ == "__main__":
    main()
When we run the configure command on the client, it now looks like this:
$ ./make.py configure
['ssh', 'bob@server.example.com', 'python', '-', 'configure_server']
['chmod', '600', '/home/bob/tmpjajwxp5r/password']
['chmod', '700', '/home/bob/tmpjajwxp5r/cat']
['sudo', '--askpass', 'python', '-', 'configure_server_root']
['hostnamectl', 'set-hostname', 'server.example.com']
We have modified the command function to also print all commands that are run. This gives us a little more visibility into what happens. And we can indeed see that it configured the hostname correctly. Just as we wanted. How does it work?
In order to issue a sudo command, the user needs to provide their password. (At least unless passwordless sudo is configured. But I wanted this minimalist approach to support sudo with a password.) Since we want to automate, we can't type the password at a prompt. Instead, we put the password in a variable at the top of the script. Really? That makes it impossible to share the make.py script, since we don't want to share our secrets. Yes, that is correct. For a minimalist approach, do we really need to share it? If we do, I'll show you later how.
The script now knows the password. It then needs to somehow provide it to sudo. One way to do that is to set the SUDO_ASKPASS environment variable to point to a script that writes the password to stdout. sudo executes this script to obtain the password when invoked with the --askpass flag.
Which program will print the password that we have configured in our make.py script? Well, there is no such program, so we create one. First we create a temporary directory where we write two files. The first is a file containing the password. The second is a shell script that prints this file to stdout.
So in our example run, we will write the following to /home/bob/tmpjajwxp5r/password:
password
And the following to /home/bob/tmpjajwxp5r/cat:
#!/usr/bin/env sh
cat /home/bob/tmpjajwxp5r/password
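The helper can be exercised without involving sudo at all. This standalone sketch (with a hypothetical password) creates both files and runs the helper directly to verify that it prints the password:

```python
import os
import subprocess
import tempfile

# Hypothetical password for illustration; in make.py this would
# come from USER_PASSWORD.
PASSWORD = "s3cret"

with tempfile.TemporaryDirectory() as d:
    # File one: the password itself, readable only by us.
    password_path = os.path.join(d, "password")
    with open(password_path, "w") as f:
        f.write(PASSWORD)
    os.chmod(password_path, 0o600)

    # File two: a shell script that prints the password file to stdout.
    helper_path = os.path.join(d, "cat")
    with open(helper_path, "w") as f:
        f.write(f"#!/usr/bin/env sh\ncat {password_path}\n")
    os.chmod(helper_path, 0o700)

    # sudo --askpass would invoke this helper via SUDO_ASKPASS;
    # here we run it directly to verify it emits the password.
    out = subprocess.run([helper_path], capture_output=True, text=True)
    print(out.stdout)
```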
Now we can run commands as sudo without having to enter our password manually. What command do we run to configure the server? We run this:
python - configure_server_root
We use the same trick again to run a different command from our make.py script. The command configure_server_root is intended to run on the server, as user root, and configure the server. And that is precisely what we do when we configure the hostname. This function can be further extended to configure more aspects of the server.
But how can the sudo process run the make.py script? The first time we "copied" the make.py script to the server, we could read it from disk, since we executed it from our client. Now we need to "copy" the script from the server to the sudo process. But the script is not available as a file on the server, because the Python process there received it via stdin.
The trick here is this:
self = read_file(__file__)
self_variable = f"SELF = {repr(self)}\n"
...
], stdin=self_variable+self)
When we read the script from disk, we modify it before we send it to the server. We prepend one line which is this:
SELF = "#!/usr/bin..."
That is, we read the whole script into self. Then we turn that script into a Python string using repr, and assign it to the variable SELF. So the resulting script that we send to the server looks like this:
SELF = "#!/usr/bin..."
#!/usr/bin...
The script that is passed to the sudo process comes from the SELF variable and is thus the original script read from disk. That script lacks the SELF variable, so we can only use this trick one level deep. (The prepended SELF line also means the shebang is no longer the first line, but since we don't rely on the shebang on the server, that is fine.)
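The trick is easy to demonstrate in miniature. This sketch uses a made-up two-line script standing in for make.py and exec instead of a subprocess, but the mechanics are the same:

```python
# A made-up two-line script standing in for make.py.
script = 'GREETING = "hello"\nprint(GREETING)\n'

# Prepend the script's own source as a string variable,
# the same way configure() does with make.py.
prefixed = f"SELF = {repr(script)}\n" + script

# Executing the prefixed script gives it access to its own
# original source via SELF.
namespace = {}
exec(prefixed, namespace)
assert namespace["SELF"] == script
```

Because repr produces a valid Python string literal for any script content (quotes and newlines included), the round trip is exact.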
With this approach we have the full power of Python to configure our server.
The same trick with prepending the SELF variable can also be used to extract secrets to a separate file. Let's say that we don't want to include USER_PASSWORD in the make.py script. Let's extract it to a separate file (that we don't need to version control or share):
$ cat secrets.py
USER_PASSWORD = "password"
Next we modify the code that constructs the script that is sent to the server to also prepend the secrets.py file:
self = read_file(__file__)
self = read_file("secrets.py") + self
self_variable = f"SELF = {repr(self)}\n"
...
], stdin=self_variable+self)
And now we can commit make.py without any secrets.
I came up with this minimalist approach when setting up a server to host wiki software that I wrote. It has now evolved to a make.py script that looks like this:
$ ./make.py
Usage:
    ./make.py [--local] deploy
    ./make.py [--local] backup
    ./make.py [--local] restore
    ./make.py [--user-sudo] configure
    ./make.py [--user-sudo] tail
    ./make.py [--user-sudo] ospatch
    ./make.py [--user-sudo] shell
fail2ban status:
    ./make.py shell fail2ban-client status
    ./make.py shell fail2ban-client status sshd
    ./make.py shell fail2ban-client banned
    ./make.py shell tail -f /var/log/fail2ban.log
    ./make.py shell journalctl -u fail2ban -f -n100
fail2ban show active bans:
    ./make.py shell firewall-cmd --list-all
    ./make.py shell firewall-cmd --list-rich-rules
fail2ban config:
    ./make.py shell cat /etc/fail2ban/jail.conf | vim -
    ./make.py shell cat /etc/fail2ban/fail2ban.conf | vim -
show last logins/failed logins:
    ./make.py shell last
    ./make.py shell lastb
The ospatch target, for example, is defined like this:
@target(default_where="--user-sudo")
def ospatch(args):
    command(["systemctl", "stop", WIKI_SERVICE])
    command(["dnf", "update", "-y"])
    command(["systemctl", "reboot"])
Every target that we define in make.py can run either locally, on the server as a regular user, or on the server as root user via sudo. It uses the same mechanism that I have explained in this blog post.
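The @target decorator itself is not shown in this post. A minimal sketch of how such a registration decorator could work (the names and dispatch here are my illustration, not the actual make.py implementation):

```python
# Registry mapping target names to (function, default location).
TARGETS = {}

def target(default_where="--local"):
    # Decorator factory: registers the decorated function as a
    # runnable target under its own name.
    def decorator(f):
        TARGETS[f.__name__] = (f, default_where)
        return f
    return decorator

@target(default_where="--user-sudo")
def ospatch(args):
    return f"would patch with {args}"

# Dispatch by name, the way main() might map sys.argv to a target.
func, where = TARGETS["ospatch"]
print(where)      # --user-sudo
print(func([]))   # would patch with []
```

With a registry like this, main() no longer needs a hand-written if/elif chain for every command.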
Part of the configure target sets up nginx and looks like this:
@target(default_where="--user-sudo")
def configure(args):
    ...
    command(["dnf", "-y", "install", "nginx"])
    write_file(NGINX_CONFIG_PATH, NGINX_CONFIG, diff=True)
    write_file(NGINX_CONFIG_PATH_PEM, NGINX_PEM, diff=True)
    command(["chmod", "600", NGINX_CONFIG_PATH_PEM])
    write_file(NGINX_CONFIG_PATH_KEY, NGINX_KEY, diff=True)
    command(["chmod", "600", NGINX_CONFIG_PATH_KEY])
    command(["systemctl", "enable", "nginx"])
    command(["systemctl", "start", "nginx"])
    command(["systemctl", "reload", "nginx"])
    ...
With this approach the config files are always written and the service is always reloaded, regardless of whether it is needed. This might be considered wasteful or ugly. Furthermore, we have to make sure that every command can be run over and over again (mkdir -p xxx instead of mkdir xxx, for example). But in practice it works well for a minimalist scenario. However, since we have the full power of Python, we could write something like this, assuming that write_file returns True if the file content was changed:
if write_file(NGINX_CONFIG_PATH_KEY, NGINX_KEY, diff=True):
    command(["systemctl", "reload", "nginx"])
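A change-detecting write_file could look something like this sketch (an assumption on my part; the real write_file in make.py may differ, and the diff parameter is only accepted, not implemented, here):

```python
import os
import tempfile

def write_file(path, content, diff=False):
    # Read the current content, if any, and compare before writing.
    # Returns True if the file was created or its content changed.
    try:
        with open(path) as f:
            old = f.read()
    except FileNotFoundError:
        old = None
    changed = old != content
    if changed:
        with open(path, "w") as f:
            f.write(content)
    return changed

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "nginx.conf")
    print(write_file(path, "server {}"))  # True: file did not exist
    print(write_file(path, "server {}"))  # False: content unchanged
```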
However, now the script starts becoming more complex. And if we really need this, maybe we need Puppet or Ansible instead. Maybe.
The make.py script also features a shell target that runs a given command, by default as the root user via sudo. In the usage text I have included some shell commands that I use to monitor the server. I can easily copy and paste different commands to monitor different aspects, and the usage serves as a reminder of what I usually do.
However, I monitor manually and only on occasion. But I would like to automate monitoring and alerting. The next step in this minimalist approach is to figure out how to do that. We probably don't want to use Prometheus and Grafana. They are too heavy and probably overkill for our purposes. But what does a minimalist approach look like? If I come up with one, I will write about it.