Migrating EOL CentOS 8 to CentOS Stream

In the same way a cobbler’s children have no shoes, I had an old computer sitting on my network running CentOS 8.3. For a frame of reference, since you’re reading this in the future: this is being written two months after CentOS 8 went EOL, and a little over a year since 8.4 was released. The machine wasn’t doing anything other than holding hard drives, and it hadn’t had a network cable plugged in for a while, so it wasn’t much of a threat.

Necessary disclaimer: I hope you’re not doing this in production.

Repo Error

It isn’t too hard to find information on how to migrate CentOS 8 to CentOS Stream, such as [this article at Linode](https://www.linode.com/docs/guides/migrate-from-centos-8-to-centos-stream/), but if you’re reading it now, you’re so far behind that you’re going to get an error trying to follow the steps.

[root@core2 dan]# dnf info centos-release-stream
CentOS Linux 8 - AppStream                                                             115  B/s |  38  B     00:00
Error: Failed to download metadata for repo 'appstream': Cannot prepare internal mirrorlist: No URLs in mirrorlist

This is caused by the repos no longer being available because of how out of date the release is; the mirrorlist no longer returns any mirror URLs.

Sed Fix

A few quick seds on the repo files in /etc/yum.repos.d/ will fix it right up. These change the mirror.centos.org URL to vault.centos.org, allowing the required packages to be pulled straight from the CentOS vault rather than from the now non-existent mirrors.

[root@core2 dan]# sed -i 's/mirror\.centos/vault\.centos/g' CentOS*.repo
[root@core2 dan]# sed -i 's/^mirrorlist/#mirrorlist/g' CentOS*.repo
[root@core2 dan]# sed -i 's/^#baseurl/baseurl/g' CentOS*.repo

Once that is complete you can continue on with the migration. During the migration these repo files will be renamed with a .rpmsave suffix and ignored by DNF once the Stream repos are added.
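
From there, the migration itself is just a few dnf commands. This is a rough sketch of the sequence from the linked Linode guide; double-check against the guide itself in case package names have changed since this was written.

[root@core2 dan]# dnf install centos-release-stream
[root@core2 dan]# dnf swap centos-linux-repos centos-stream-repos
[root@core2 dan]# dnf distro-sync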

Podman for container management

After a look at all the random virtual machines running across a few systems on my home network, I decided it’s really time to start migrating from VMs to containers rather than standing up multiple VMs, each for its own task. I’ve used containers for specific cases such as testing software from an official image off DockerHub, or running builds in a CI system, but I generally stick to VMs for isolation and familiarity with the workflow.

Podman is a “daemonless container engine” that comes installed on Fedora Workstation. There are plenty of qualified sources of information for more details, but a typical user can look at it almost as a drop-in replacement for running Docker containers. Behind the scenes there are quite a few differences between Podman and Docker, but that would quickly go beyond the scope of this post and is better left to those more qualified sources. The primary thing to keep in mind is that it is daemonless, meaning there is no separate service that has to be running, and it allows a user to create containers without elevated or special privileges.
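
As a quick illustration of the rootless side, a regular user can run a throwaway container with no sudo and no service running (any small public image works; alpine here is just an example):

$ podman run --rm docker.io/library/alpine:latest echo "hello from a rootless container"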

Basics

Containers

Podman has containers, which run as regular processes rather than under a daemon. A container is the running process of an image. Containers can be interacted with using the same commands as Docker, such as run to start a container and exec to run a command in a running one.
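
For example, starting a container and running a command inside it looks just like it does with Docker (the name and image here are hypothetical, separate from the WordPress example later):

$ podman run -d --name web -p 8080:80 docker.io/library/nginx:latest   # start a container from an image
$ podman exec -it web /bin/sh                                          # open a shell inside the running container
$ podman stop web && podman rm web                                     # stop and remove it when done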

Images

Images hold the same meaning in Podman as they do in Docker. Images are the compilation of layers of commands, filesystem, etc., that make up an… image. An image is the definition of a container; a container is a running image. It’s layers all the way down. Again, the same commands as Docker apply, such as pull, push, and list.
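
The image commands mirror Docker as well (arbitrary image used as an example):

$ podman pull docker.io/library/alpine:latest   # download an image
$ podman images                                 # list local images
$ podman rmi docker.io/library/alpine:latest    # remove the image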

Pods

Pods are where podman will differ for someone with a bit of familiarity with Docker, but not enough to have dug into something like Kubernetes. A pod is a group of containers in a single namespace; the containers in a pod are linked and can communicate with each other. Pods also include one extra container, the infra container. This container does nothing but sleep, which keeps the pod running even if no other containers are. There’s an excellent write-up on pods from the Podman project.
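
You can build a pod by hand to see this without podman-compose (names here are hypothetical):

$ podman pod create --name demo_pod -p 8080:80                    # creates the pod and its infra container
$ podman run -d --pod demo_pod docker.io/library/nginx:latest     # joins the pod's namespaces
$ podman pod list                                                 # reports 2 containers: nginx plus infra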

Podman-compose

Podman-compose doesn’t offer complete parity with docker-compose, but for most users it will probably be fine. Like docker-compose, podman-compose stands up the container(s) defined in a YAML file.
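
podman-compose is a separate project from podman and may not be installed by default; at the time it was a small Python tool, so one way to grab it (an assumption on my part, check your distro’s packages first) is from PyPI:

$ pip3 install --user podman-compose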

The default networking in podman-compose runs all the containers in a single pod. To see how well it works, you can give it a shot with an example straight from the docker-compose documentation using wordpress.

The docker-compose.yml file uses a mysql and a wordpress image to stand up a basic WordPress installation in two containers. This is a good example for exposing an HTTP port on the wordpress container, as well as the network connection between the two for database access.

version: '3.3'

services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress

  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "8000:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress

volumes:
  db_data: {}

podman-compose up

podman-compose up runs containers with the images and attributes defined in the docker-compose.yml file. Adding -d runs the containers in detached mode, so they detach from the command once they start successfully.

One interesting note in the output below is the translation for mounting the volumes: the volumes end up in a namespaced directory under /home/dan/.local/share/containers/storage. The volume is in the user’s home directory and not in /var as Docker does by default. This is a good thing on a laptop/desktop/workstation, where /home is typically a large partition compared to /var.

dan@host:~/Projects/example_wordpress$ podman-compose up -d
podman pod create --name=example_wordpress --share net -p 8000:80
98810f0d9df2ca8faec58d05445f7aa36e3f8a7f285b893e5829155753cea6f8
0
podman volume inspect example_wordpress_db_data || podman volume create example_wordpress_db_data
Error: no volume with name "example_wordpress_db_data" found: no such volume
podman run --name=example_wordpress_db_1 -d --pod=example_wordpress --label io.podman.compose.config-hash=123 --label io.podman.compose.project=example_wordpress --label io.podman.compose.version=0.0.1 --label com.docker.compose.container-number=1 --label com.docker.compose.service=db -e MYSQL_ROOT_PASSWORD=somewordpress -e MYSQL_DATABASE=wordpress -e MYSQL_USER=wordpress -e MYSQL_PASSWORD=wordpress --mount type=bind,source=/home/dan/.local/share/containers/storage/volumes/example_wordpress_db_data/_data,destination=/var/lib/mysql,bind-propagation=z --add-host db:127.0.0.1 --add-host example_wordpress_db_1:127.0.0.1 --add-host wordpress:127.0.0.1 --add-host example_wordpress_wordpress_1:127.0.0.1 mysql:5.7
f3fa682c5a7ab8dee19888acbf714c752cf7688657c9c161a20a951894491d26
0
podman run --name=example_wordpress_wordpress_1 -d --pod=example_wordpress --label io.podman.compose.config-hash=123 --label io.podman.compose.project=example_wordpress --label io.podman.compose.version=0.0.1 --label com.docker.compose.container-number=1 --label com.docker.compose.service=wordpress -e WORDPRESS_DB_HOST=db:3306 -e WORDPRESS_DB_USER=wordpress -e WORDPRESS_DB_PASSWORD=wordpress -e WORDPRESS_DB_NAME=wordpress --add-host db:127.0.0.1 --add-host example_wordpress_db_1:127.0.0.1 --add-host wordpress:127.0.0.1 --add-host example_wordpress_wordpress_1:127.0.0.1 wordpress:latest
ae402644bb0a3c8487b6a3efcc510f4454b4faea4086f856520f6f27724c7349
0
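
To confirm where a volume lives without reading through those long run commands, inspecting it shows the mount point on the host (exact output format varies between podman versions):

$ podman volume inspect example_wordpress_db_data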

Podman pods

Running containers are viewed with ps. Note there are two, wordpress:latest and mysql:5.7, as expected from the compose file. The output below uses the --format option to show only a few of the columns and make it easier to read.

dan@host:~/Projects/example_wordpress$ podman ps --format "table {{.ID}} {{.Image}} {{.Status}} {{.Ports}} {{.Names}}"
ID             Image                                Status              Ports                     Names
ae402644bb0a   docker.io/library/wordpress:latest   Up 3 minutes ago    0.0.0.0:8000->80/tcp   example_wordpress_wordpress_1
f3fa682c5a7a   docker.io/library/mysql:5.7          Up 3 minutes ago    0.0.0.0:8000->80/tcp   example_wordpress_db_1

Note that example_wordpress is included in the container names; it is also the name of the pod the containers are running in, which podman-compose names after the directory where it was executed. Pods can be viewed with podman pod list, and more details can be viewed with the inspect command, as demonstrated with the example_wordpress pod below.

The pod list shows three containers running in the example_wordpress pod, even though only two images were defined in the docker-compose.yml file and podman ps displayed only two running containers. It also includes an INFRA ID column with the beginning of a SHA.

dan@host:~/Projects/example_wordpress$ podman pod list
POD ID         NAME                STATUS    CREATED          # OF CONTAINERS   INFRA ID
98810f0d9df2   example_wordpress   Running   3 minutes ago    3                 ad6ee2217602

Running inspect provides more info about what containers are running in that pod and their IDs.

dan@host:~/Projects/example_wordpress$ podman pod inspect example_wordpress | jq '.Containers'
[
{
"id": "ad6ee22176020c5cfa93e3e0bd740a5e147781e22711784ae341978ef05339a5",
"state": "running"
},
{
"id": "ae402644bb0a3c8487b6a3efcc510f4454b4faea4086f856520f6f27724c7349",
"state": "running"
},
{
"id": "f3fa682c5a7ab8dee19888acbf714c752cf7688657c9c161a20a951894491d26",
"state": "running"
}
]

As mentioned at the beginning of this post, pods have an infra container that sleeps to keep the pod running. It shows up in the full inspect output; the jq filter above just trimmed it out. The infra container’s ID is given in the state information, and it matches the INFRA ID column.

dan@host:~/Projects/example_wordpress$ podman pod inspect example_wordpress | jq '.State.infraContainerID'
"ad6ee22176020c5cfa93e3e0bd740a5e147781e22711784ae341978ef05339a5"

That container can be viewed in the container list using --filter.

dan@host:~/Projects/example_wordpress$ podman container list -a --filter id=ad6ee22176020c5cfa93e3e0bd740a5e147781e22711784ae341978ef05339a5
CONTAINER ID  IMAGE                 COMMAND  CREATED         STATUS             PORTS                 NAMES
ad6ee2217602  k8s.gcr.io/pause:3.1           5 minutes ago   Up 5 minutes ago   0.0.0.0:8000->80/tcp  98810f0d9df2-infra

Cool. Remember that container is running with the intention of keeping the pod alive even if no other containers are. That can be seen in action. A note for those unfamiliar: you can specify just the first few characters of the SHA to identify a container, rather than using the full SHA.

Let’s kill the containers that aren’t infra.

dan@host:~/Projects/example_wordpress$ podman container stop ae40264
ae402644bb0a3c8487b6a3efcc510f4454b4faea4086f856520f6f27724c7349

dan@host:~/Projects/example_wordpress$ podman container stop f3fa68
f3fa682c5a7ab8dee19888acbf714c752cf7688657c9c161a20a951894491d26

And take another look to make sure they’re exited.

dan@host:~/Projects/example_wordpress$ podman-compose ps
podman ps --filter label=io.podman.compose.project=example_wordpress
CONTAINER ID  IMAGE                               COMMAND               CREATED         STATUS                     PORTS                 NAMES
ae402644bb0a  docker.io/library/wordpress:latest  apache2-foregroun...  5 minutes ago   Exited (0) 23 seconds ago  0.0.0.0:8000->80/tcp  example_wordpress_wordpress_1
f3fa682c5a7a  docker.io/library/mysql:5.7         mysqld                5 minutes ago   Exited (0) 4 seconds ago   0.0.0.0:8000->80/tcp  example_wordpress_db_1

Now view the pod to see if it’s still running. It should be (and is) thanks to the infra container.

dan@host:~/Projects/example_wordpress$ podman pod list

POD ID         NAME                STATUS    CREATED          # OF CONTAINERS   INFRA ID
98810f0d9df2   example_wordpress   Running   5 minutes ago   3                 ad6ee2217602

And check out the containers via inspect on the pod again. Only one is running (the output is filtered with jq here, but the full output shows the container IDs as well).

dan@host:~/Projects/example_wordpress$ podman pod inspect example_wordpress | jq '.Containers[].state'
"running"
"exited"
"exited"

Now kill the infra container…

dan@host:~/Projects/example_wordpress$ podman container stop ad6ee
ad6ee22176020c5cfa93e3e0bd740a5e147781e22711784ae341978ef05339a5

…and the pod is finally exited.

dan@host:~/Projects/example_wordpress$ podman pod list
POD ID         NAME                STATUS    CREATED          # OF CONTAINERS   INFRA ID
98810f0d9df2   example_wordpress   Exited    5 minutes ago    3                 ad6ee2217602

Conclusion

Fun stuff all around. Thanks to Podman coming installed on a fresh Fedora Workstation, and not requiring elevated privileges or a daemon, it’s a great way to dig in and start using containers. Having almost exactly the same functionality and parameters as Docker makes it easy to transfer skills from Podman to Docker or the other way around.

Jenkins Configuration as Code

A recent problem I had to solve was how to mirror a Jenkins instance with pretty restrictive permissions so that our team could duplicate jobs using Jenkins Job Builder. I’ve installed and configured Jenkins more times than I can count, manually and through configuration management, but for most software developers this task is more intimidating, and even less interesting, than I find it. I took this opportunity to learn about Jenkins Configuration as Code (JCasC) as a possible solution and am more than pleased with the results. With JCasC (and JJB or Pipelines), you can go from a fresh Jenkins install to running jobs without ever logging into the Jenkins UI.

The problem

In order to modify, improve, or add Jenkins jobs to existing Jenkins Job Builder templates, we needed a way to mirror our production Jenkins instance locally to ensure what we thought we were doing was what was actually going to happen. I’m not going to go into details, but pre-baking images raised the issue of needing a repository to hold them, and using Docker images runs into problems over a VPN, along with just enough cross-platform difficulties.

How it works

Jenkins Configuration as Code is (as per their documentation) “an opinionated way to configure jenkins based on human-readable declarative files.” This provides a way to declare Jenkins configuration via a YAML file without ever clicking through the UI. JCasC uses plugins to configure Jenkins based on a YAML file whose location is declared in an environment variable.

Plugins

The configuration-as-code and configuration-as-code-support plugins are required to read the YAML file and configure Jenkins from it.

Yaml file

See the documentation for examples and more information on viewing the yaml schema.
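
For a rough idea of the shape, here is a minimal sketch adapted from the examples in the JCasC documentation. The path, admin user, and ADMIN_PASSWORD variable are illustrative, not from my actual setup:

# cat /var/lib/jenkins/jenkins.yaml
jenkins:
  systemMessage: "Jenkins configured automatically by JCasC"
  numExecutors: 2
  securityRealm:
    local:
      allowsSignup: false
      users:
        - id: "admin"
          password: "${ADMIN_PASSWORD}"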

Solution Overview

Anyone familiar with Jenkins can see the chicken-and-egg problem here: to configure Jenkins with JCasC, you need the plugins installed, and to install plugins, you normally need to log in to the UI and configure Jenkins. While you can do just that and already see huge benefits with JCasC (install Jenkins, install the JCasC plugin, point it at a yaml file), there is an even easier way. Here’s how I got around this.

  1. Install Jenkins
  2. Move over the JCasC yaml file to a readable place for the jenkins user
  3. Install generic config.xml in /var/lib/jenkins/config.xml
  4. Add CASC_JENKINS_CONFIG to /etc/sysconfig/jenkins to point to the yaml file (example after this list)
  5. Start Jenkins
  6. Using server configuration management (such as the jenkins_plugin module in Ansible), install required plugins
  7. Restart Jenkins
  8. Profit
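
Step 4, for example, is a single line in /etc/sysconfig/jenkins; the yaml path is whatever location you picked in step 2 (the path below is just an assumption):

# echo 'CASC_JENKINS_CONFIG="/var/lib/jenkins/jenkins.yaml"' >> /etc/sysconfig/jenkins

Jenkins picks the variable up when it starts in step 5 and again after the plugin install and restart in step 7.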

More details

JCasC can use Kubernetes secrets, Docker secrets, or Vault for secrets management. There’s a bit more configuration there, and I haven’t used it. But using templating and environment variables with Ansible means the configuration file containing secrets is only ever written inside the VM on your local workstation (you are using an encrypted filesystem or ${HOME} directory, right?).

Molecule for existing Ansible roles

I previously walked through Ansible role creation with Molecule, but it’s just as easy to add to existing roles. Creating a Molecule scenario to test an existing role allows for easy testing and modification of that role with all the benefits that Molecule provides.

Existing role

For another easy example, we’ll just use a simple role that installs a webserver. To prevent a complete copy/paste, this time we will be using Apache rather than Nginx. To show the existing role in its current state:

~/Projects/example_playbooks/apache_install$ tree
.
└── tasks
    └── main.yml

1 directory, 1 file
~/Projects/example_playbooks/apache_install$ cat tasks/main.yml 
---
# install and start apache    
- name: install apache
  yum:
    name: httpd
    state: present
  become: "yes"

- name: ensure apache running and enabled
  systemd:
    name: httpd
    state: started
    enabled: "yes"
  become: "yes"

The Molecule and Ansible version used for this example is:

~/Projects/example_playbooks/apache_install$ ansible --version && molecule --version
ansible 2.6.4
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/home/dan/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /home/dan/.local/lib/python2.7/site-packages/ansible
  executable location = /home/dan/.local/bin/ansible
  python version = 2.7.12 (default, Dec  4 2017, 14:50:18) [GCC 5.4.0 20160609]
molecule, version 2.17.0

Init scenario

Because the role already exists, we will only be creating a scenario, rather than a whole new role. The init scenario parameters are almost exactly the same as init role, and result in the same Molecule directory structure as if we had created the role with Molecule.

Molecule init scenario usage information:

~/Projects/example_playbooks/apache_install$ molecule init scenario --help
Usage: molecule init scenario [OPTIONS]

  Initialize a new scenario for use with Molecule.

Options:
  --dependency-name [galaxy]      Name of dependency to initialize. (galaxy)
  -d, --driver-name [azure|delegated|docker|ec2|gce|lxc|lxd|openstack|vagrant]
                                  Name of driver to initialize. (docker)
  --lint-name [yamllint]          Name of lint to initialize. (ansible-lint)
  --provisioner-name [ansible]    Name of provisioner to initialize. (ansible)
  -r, --role-name TEXT            Name of the role to create.  [required]
  -s, --scenario-name TEXT        Name of the scenario to create. (default)
                                  [required]
  --verifier-name [goss|inspec|testinfra]
                                  Name of verifier to initialize. (testinfra)
  --help                          Show this message and exit.

We create the scenario using the existing role name and specifying Vagrant as the driver. Once initialized, the Molecule directory structure will be the same as if we had created the role with Molecule, but without any role directories being created (such as handlers, meta, etc.).

~/Projects/example_playbooks/apache_install$ molecule init scenario --role-name apache_install --driver-name vagrant
--> Initializing new scenario default...
Initialized scenario in /home/dan/Projects/example_playbooks/apache_install/molecule/default successfully.
~/Projects/example_playbooks/apache_install$ tree
.
├── molecule
│   └── default
│       ├── INSTALL.rst
│       ├── molecule.yml
│       ├── playbook.yml
│       ├── prepare.yml
│       └── tests
│           └── test_default.py
└── tasks
    └── main.yml

4 directories, 6 files

Configuration

The Molecule configuration will be the default provided by Molecule. As done previously, I edit this to use CentOS 7 rather than the default Ubuntu 16.04. Additionally, I update the name of the VM to something that distinguishes it if needed.

In this example our tests are very similar to my previous example. The primary (and possibly only) difference from the previous tests is that we’re testing for the httpd service rather than nginx.

~/Projects/example_playbooks/apache_install$ cat molecule/default/molecule.yml 
---
dependency:
  name: galaxy
driver:
  name: vagrant
  provider:
    name: virtualbox
lint:
  name: yamllint
platforms:
  - name: apache
    box: centos/7
provisioner:
  name: ansible
  lint:
    name: ansible-lint
scenario:
  name: default
verifier:
  name: testinfra
  lint:
    name: flake8
~/Projects/example_playbooks/apache_install$ cat molecule/default/tests/test_default.py
import os

import testinfra.utils.ansible_runner

testinfra_hosts = testinfra.utils.ansible_runner.AnsibleRunner(
    os.environ['MOLECULE_INVENTORY_FILE']).get_hosts('all')


def test_apache_installed(host):
    apache = host.package("httpd")
    assert apache.is_installed


def test_apache_config(host):
    apache = host.file('/etc/httpd/conf/httpd.conf')
    assert apache.exists


def test_apache_running_and_enabled(host):
    apache = host.service("httpd")
    assert apache.is_running
    assert apache.is_enabled

Molecule test

Since we’ve updated our Molecule configuration to use the Vagrant box we want, and updated our tests to ensure that our role is doing what we want, we can now run any of the Molecule commands (test, create, converge, etc.) just as we would if we had started the role using Molecule.

~/Projects/example_playbooks/apache_install$ molecule test
--> Validating schema /home/dan/Projects/example_playbooks/apache_install/molecule/default/molecule.yml.
Validation completed successfully.
--> Test matrix

└── default
    ├── lint
    ├── destroy
    ├── dependency
    ├── syntax
    ├── create
    ├── prepare
    ├── converge
    ├── idempotence
    ├── side_effect
    ├── verify
    └── destroy

--> Scenario: 'default'
--> Action: 'lint'
--> Executing Yamllint on files found in /home/dan/Projects/example_playbooks/apache_install/...
Lint completed successfully.
--> Executing Flake8 on files found in /home/dan/Projects/example_playbooks/apache_install/molecule/default/tests/...
Lint completed successfully.
--> Executing Ansible Lint on /home/dan/Projects/example_playbooks/apache_install/molecule/default/playbook.yml...
Lint completed successfully.
--> Scenario: 'default'
--> Action: 'destroy'

    PLAY [Destroy] *****************************************************************

    TASK [Destroy molecule instance(s)] ********************************************
    changed: [localhost] => (item=None)
    changed: [localhost]

    TASK [Populate instance config] ************************************************
    ok: [localhost]

    TASK [Dump instance config] ****************************************************
    changed: [localhost]

    PLAY RECAP *********************************************************************
    localhost                  : ok=3    changed=2    unreachable=0    failed=0


--> Scenario: 'default'
--> Action: 'dependency'
Skipping, missing the requirements file.
--> Scenario: 'default'
--> Action: 'syntax'

    playbook: /home/dan/Projects/example_playbooks/apache_install/molecule/default/playbook.yml

--> Scenario: 'default'
--> Action: 'create'

    PLAY [Create] ******************************************************************

    TASK [Create molecule instance(s)] *********************************************
    changed: [localhost] => (item=None)
    changed: [localhost]

    TASK [Populate instance config dict] *******************************************
    ok: [localhost] => (item=None)
    ok: [localhost]

    TASK [Convert instance config dict to a list] **********************************
    ok: [localhost]

    TASK [Dump instance config] ****************************************************
    changed: [localhost]

    PLAY RECAP *********************************************************************
    localhost                  : ok=4    changed=2    unreachable=0    failed=0


--> Scenario: 'default'
--> Action: 'prepare'

    PLAY [Prepare] *****************************************************************

    TASK [Install python for Ansible] **********************************************
    ok: [apache]

    PLAY RECAP *********************************************************************
    apache                     : ok=1    changed=0    unreachable=0    failed=0


--> Scenario: 'default'
--> Action: 'converge'

    PLAY [Converge] ****************************************************************

    TASK [Gathering Facts] *********************************************************
    ok: [apache]

    TASK [apache_install : install apache] *****************************************
    changed: [apache]

    TASK [apache_install : ensure apache running and enabled] **********************
    changed: [apache]

    PLAY RECAP *********************************************************************
    apache                     : ok=3    changed=2    unreachable=0    failed=0


--> Scenario: 'default'
--> Action: 'idempotence'
Idempotence completed successfully.
--> Scenario: 'default'
--> Action: 'side_effect'
Skipping, side effect playbook not configured.
--> Scenario: 'default'
--> Action: 'verify'
--> Executing Testinfra tests found in /home/dan/Projects/example_playbooks/apache_install/molecule/default/tests/...
    ============================= test session starts ==============================
    platform linux2 -- Python 2.7.12, pytest-3.3.1, py-1.5.2, pluggy-0.6.0
    rootdir: /home/dan/Projects/example_playbooks/apache_install/molecule/default, inifile:
    plugins: testinfra-1.14.1
collected 3 items                                                              

    tests/test_default.py ...                                                [100%]

    =========================== 3 passed in 5.62 seconds ===========================
Verifier completed successfully.
--> Scenario: 'default'
--> Action: 'destroy'

    PLAY [Destroy] *****************************************************************

    TASK [Destroy molecule instance(s)] ********************************************
    changed: [localhost] => (item=None)
    changed: [localhost]

    TASK [Populate instance config] ************************************************
    ok: [localhost]

    TASK [Dump instance config] ****************************************************
    changed: [localhost]

    PLAY RECAP *********************************************************************
    localhost                  : ok=3    changed=2    unreachable=0    failed=0

Conclusion

Molecule not only provides great defaults and a consistent directory structure when creating a new role, but also makes it easy to add a testing workflow to roles that already exist. Adding Molecule scenarios to existing roles is a simple, efficient way to test them across operating systems and Ansible versions and improve their reliability.

Mailx for cron email

Cron defaults to sending job output to the owner’s mail, to the address set in a MAILTO variable, or directly to syslog when sendmail is not installed. If the server does not have a mail server running, is in a network configured specifically not to send email, or is unable to send email to a particular server or service, this causes a problem. To get around the issue of mail not being accepted by some third parties, as I described in a previous post, emails sent by cron can instead use a Simple Mail Transfer Protocol (SMTP) client to send through an external Mail Transfer Agent (MTA).

mailx

After a bit of searching, I found that mailx provides a method for connecting to an external SMTP server with simple configuration. According to the man page, mailx is an intelligent mail processing system [...] intended to provide the functionality of the POSIX mailx command, and offers extensions for MIME, IMAP, POP3, SMTP and S/MIME.

Installation

Installation was completed on a CentOS 7 VPS instance. mailx is available in the base repository and can be installed with a simple yum command:

# yum install mailx

mailx configuration

Installation creates a default /etc/mail.rc file. You can then look through the man page via man mailx for further configuration options. Since the plan is to use it for SMTP, searching for smtp turns up the relevant options.

I’m using Gmail, and the documentation from Google for email client configuration provided the required SMTP host:TLS-Port combination of smtp.gmail.com:587.

For the smtp-auth-password, I can’t use my own password since I’ve got 2-Step Verification enabled on my account; the server simply wouldn’t be able to send email if I had to verify it and provide a code each time. Gmail offers a way around this: App Passwords, for email clients that cannot use two-factor authentication. Creating an app password takes just a couple of steps. Each server or client using an App Password should have its own unique password; a unique app password per application provides logs of its use and makes it easy to revoke if needed.

We can test our configuration as we go along with the following command:

# echo "mailx test" | mailx -s "Test Email" <EMAIL_ADDRESS>

The first round of doing that gave an error:

# smtp-server: 530 5.7.0 Must issue a STARTTLS command first. 207-v6sm21173418oie.14 - gsmtp
"/root/dead.letter" 11/308
. . . message not sent.

Easy enough to resolve: another look at the man page (or a quick grep) shows that we need to include smtp-use-starttls.

# man mailx | grep -ie starttls | grep -i smtp
       smtp-use-starttls
              Causes mailx to issue a STARTTLS command to make an SMTP session SSL/TLS encrypted.  Not all servers support this command; because of common implementation defects, it cannot be automatically
              There  are  two  possible methods to get SSL/TLS encrypted SMTP sessions: First, the STARTTLS command can be used to encrypt a session after it has been initiated, but before any user-
              related data has been sent; see smtp-use-starttls above.  Second, some servers accept sessions that are encrypted from  their  beginning  on.  This  mode  is  configured  by  assigning

After updating the configuration, I found another error.

# Missing "nss-config-dir" variable.
"/root/dead.letter" 11/308
. . . message not sent.

To resolve that, I just looked for an nss* directory in /etc (knowing that SSL information and certs live there) and added it to the configuration.

# find /etc -type d -name "nss*"
/etc/pki/nssdb

Then I got yet another error:

# Error in certificate: Peer's certificate issuer is not recognized.
Continue (y/n)? SSL/TLS handshake failed: Peer's certificate issuer is not recognized.
"/root/dead.letter" 11/308
. . . message not sent.

Time for a bit more sleuthing. For whatever reason, the certificate issuer was not recognized and mailx asked for manual intervention. After some searching around I figured it might be due to Google’s new(ish) CA, but adding it directly to the PKI trusted CAs didn’t help. Eventually I found a page on adding these certs directly, but in order to just get the configuration running I opted for laziness and set ssl-verify to ignore, with the intention of turning this into an Ansible role at a later point.

Finally, we have the configuration below.

# cat /etc/mail.rc
set from=<YOUR_EMAIL_ADDRESS>
set smtp-use-starttls
set nss-config-dir=/etc/pki/nssdb/
set ssl-verify=ignore
set smtp-auth=login
set smtp=smtp://smtp.gmail.com:587
set smtp-auth-user=<YOUR_GMAIL_USER>
set smtp-auth-password=<YOUR_APP_PASSWORD>

Running the testing command with these configuration settings results in a new email showing up in our inbox.

cron configuration

In order for cron to use mailx, we need to do two things. First, cron will only send mail if MAILTO is set. We can add that directly into the crontab with crontab -e by adding the MAILTO variable. Afterwards, we’ll see it included in crontab -l.

And to test this, we should set up a cron job that produces output (also using crontab -e):

# crontab -l
MAILTO="<YOUR_EMAIL>"
* * * * * /usr/sbin/ip a

We also need to set crond to use mailx by editing the crond configuration to specify /usr/bin/mailx for sending mail, with the -t flag passed to mailx so it uses the To: header to address the email. After editing /etc/sysconfig/crond, restart crond.

# cat /etc/sysconfig/crond 
# Settings for the CRON daemon.
# CRONDARGS= :  any extra command-line startup arguments for crond
CRONDARGS=-m "/usr/bin/mailx -t"
# systemctl restart crond

Testing configuration

The crontab should now send the output of ip a to <YOUR_EMAIL> every minute. Once you’ve verified it, be sure to remove that job to prevent flooding your inbox.

If you don’t see a new email, take a look at the system logs to see entries from the crond service in reversed order (newest entries first).

# journalctl -r --unit crond

Because of the certificate issue noted above, and because mailx strips the extra headers before sending mail, the following output may show up in the journald logs even for a successfully sent mail.

Sep 03 19:35:01 <YOUR_HOST> crond[12378]: Error in certificate: Peer's certificate issuer is not recognized.
Sep 03 19:35:01 <YOUR_HOST> crond[12378]: Ignoring header field "X-Cron-Env: <USER=root>"
Sep 03 19:35:01 <YOUR_HOST> crond[12378]: Ignoring header field "X-Cron-Env: <LOGNAME=root>"
Sep 03 19:35:01 <YOUR_HOST> crond[12378]: Ignoring header field "X-Cron-Env: <PATH=/usr/bin:/bin>"
Sep 03 19:35:01 <YOUR_HOST> crond[12378]: Ignoring header field "X-Cron-Env: <HOME=/root>"
Sep 03 19:35:01 <YOUR_HOST> crond[12378]: Ignoring header field "X-Cron-Env: <SHELL=/bin/sh>"
Sep 03 19:35:01 <YOUR_HOST> crond[12378]: Ignoring header field "X-Cron-Env: <MAILTO=<YOUR_EMAIL>>"
Sep 03 19:35:01 <YOUR_HOST> crond[12378]: Ignoring header field "X-Cron-Env: <LANG=en_US.UTF-8>"
Sep 03 19:35:01 <YOUR_HOST> crond[12378]: Ignoring header field "X-Cron-Env: <XDG_RUNTIME_DIR=/run/user/0>"
Sep 03 19:35:01 <YOUR_HOST> crond[12378]: Ignoring header field "X-Cron-Env: <XDG_SESSION_ID=71363>"
Sep 03 19:35:01 <YOUR_HOST> crond[12378]: Ignoring header field "Precedence: bulk"
Sep 03 19:35:01 <YOUR_HOST> crond[12378]: Ignoring header field "Auto-Submitted: auto-generated"
Sep 03 19:35:01 <YOUR_HOST> crond[12378]: Ignoring header field "Content-Type: text/plain; charset=UTF-8"

And looking at the email source we should see something like the following (note, I did not include all output in the example below):

Return-Path: <YOUR_EMAIL>
Received: from <YOUR_HOST> ([<YOUR_IP_ADDRESS>])
        by smtp.gmail.com with ESMTPSA id <REDACTED>
        for <YOUR_EMAIL>
        (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
        Mon, 03 Sep 2018 12:35:01 -0700 (PDT)
Message-ID: <MESSAGE_ID>
From: "(Cron Daemon)" <YOUR_EMAIL>
X-Google-Original-From: "(Cron Daemon)" <root>
Date: Mon, 03 Sep 2018 19:35:01 +0000
To: <YOUR_EMAIL>
Subject: Cron <root@YOUR_HOST> /usr/sbin/ip a
User-Agent: Heirloom mailx 12.5 7/5/10
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
...

Now we can be sure our cron jobs mail us through an external SMTP server that will successfully deliver to our third party service. And with mailx configured we can easily add an email component for any scripts we might want to run.

Of note, Google’s G Suite does provide access to an SMTP relay that specifically allows sending email to addresses either inside or outside of your domain. There are some limits on the number of emails that can be sent based on the number of licenses on your account, but for my purposes and imposed limits, configuring mailx was a suitable solution.
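
For reference, pointing mailx at the relay instead would mostly just change the smtp line. The sketch below assumes smtp-relay.gmail.com on port 587 (per Google’s SMTP relay documentation) with IP-based authentication configured in the admin console, in which case the smtp-auth settings aren’t needed; treat it as a starting point rather than a tested configuration.

# cat /etc/mail.rc
set from=<YOUR_EMAIL_ADDRESS>
set smtp-use-starttls
set nss-config-dir=/etc/pki/nssdb/
set ssl-verify=ignore
set smtp=smtp://smtp-relay.gmail.com:587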