Podman for container management

After a look at all the random virtual machines running across a few systems on my home network, I decided it’s really time to start migrating from VMs to containers rather than having multiple VMs stood up, each for its own task. I’ve used containers for specific cases such as testing software from an official image off DockerHub, or running builds in a CI system, but generally stick to VMs for isolation and familiarity with the workflow.

Podman is a “daemonless container engine” that comes installed on Fedora Workstation. There are plenty of qualified sources of information for more detail, but a typical user can look at it almost as a drop-in replacement for running docker containers. Behind the scenes there are quite a few differences between podman and docker, but that would quickly go beyond the scope of this post and is better left to those more qualified sources. The primary thing to keep in mind is that it is daemonless, meaning there is no separate service required to be running, and it allows a user to create containers without elevated or specific privileges.
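On a stock Fedora Workstation install you can confirm it’s already there, no root required:

$ podman --version
$ podman info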

Basics

Containers

Podman has containers, which run as ordinary processes rather than under a daemon. A container is the running process of an image. Containers can be interacted with using the same commands as docker, such as run to start a container and exec to run a command in a running container.
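As a quick illustration (the image and container name here are arbitrary examples, not anything from the setup later in this post):

$ podman run -d --name web -p 8080:80 docker.io/library/nginx:latest
$ podman exec -it web /bin/bash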

Images

Images hold the same meaning in Podman as they do in Docker. Images are the compilation of layers of commands, filesystem, etc., that make up an… image. An image is the definition of a container; a container is a running image. It’s layers all the way down. Again, the same commands as docker apply, such as pull, push, and list.
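For example, pulling an image and listing what’s stored locally looks exactly like it does with docker (alpine here is just a convenient small example):

$ podman pull docker.io/library/alpine:latest
$ podman image list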

Pods

Pods are where podman will differ for someone with a bit of familiarity with docker, but not enough to have dug into something like Kubernetes. Pods are a group of containers in a single namespace. The containers in a pod are linked and can communicate with each other. Pods also include one extra container, the infra container. This container does nothing but sleep, and does so to keep the pod running even if no other containers are running. There’s an excellent bit of information on this from the podman project.
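Pods can also be created by hand, without podman-compose; a minimal sketch with arbitrary names and ports might look like:

$ podman pod create --name demo -p 8080:80
$ podman run -d --pod demo docker.io/library/nginx:latest
$ podman pod list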

Podman-compose

Podman-compose doesn’t offer complete parity with docker-compose, but for most users this will probably be fine. Like docker-compose, podman-compose stands up the container(s) defined in a yaml file.
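I’m assuming podman-compose is already available on your system; if your distribution doesn’t package it, it’s a small Python project that can typically be installed from PyPI:

$ pip3 install --user podman-compose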

The default networking in podman-compose runs all the containers in a single pod. To see how well it works, you can give it a shot with an example straight from the docker-compose documentation using wordpress.

The docker-compose.yml file uses a mysql and a wordpress image to stand up a basic WordPress installation in two containers. This is a good example for exposing an HTTP port on the wordpress container, as well as a network connection between the two for database access.

version: '3.3'

services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress

  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "8000:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress

volumes:
  db_data: {}

podman-compose up

Podman-compose up runs containers with the images and attributes defined in the docker-compose.yml file. Adding -d runs the containers in detached mode, so they detach from the command once they start successfully.

One interesting note in the output below is the translation for mounting the volumes: the volumes live in a namespaced directory under /home/dan/.local/share/containers/storage. The volume is in the user’s home directory, not in /var as docker uses by default. This is a good thing on a laptop/desktop/workstation where /home is typically a large partition compared to /var.

dan@host:~/Projects/example_wordpress$ podman-compose up -d
podman pod create --name=example_wordpress --share net -p 8000:80
98810f0d9df2ca8faec58d05445f7aa36e3f8a7f285b893e5829155753cea6f8
0
podman volume inspect example_wordpress_db_data || podman volume create example_wordpress_db_data
Error: no volume with name "example_wordpress_db_data" found: no such volume
podman run --name=example_wordpress_db_1 -d --pod=example_wordpress --label io.podman.compose.config-hash=123 --label io.podman.compose.project=example_wordpress --label io.podman.compose.version=0.0.1 --label com.docker.compose.container-number=1 --label com.docker.compose.service=db -e MYSQL_ROOT_PASSWORD=somewordpress -e MYSQL_DATABASE=wordpress -e MYSQL_USER=wordpress -e MYSQL_PASSWORD=wordpress --mount type=bind,source=/home/dan/.local/share/containers/storage/volumes/example_wordpress_db_data/_data,destination=/var/lib/mysql,bind-propagation=z --add-host db:127.0.0.1 --add-host example_wordpress_db_1:127.0.0.1 --add-host wordpress:127.0.0.1 --add-host example_wordpress_wordpress_1:127.0.0.1 mysql:5.7
f3fa682c5a7ab8dee19888acbf714c752cf7688657c9c161a20a951894491d26
0
podman run --name=example_wordpress_wordpress_1 -d --pod=example_wordpress --label io.podman.compose.config-hash=123 --label io.podman.compose.project=example_wordpress --label io.podman.compose.version=0.0.1 --label com.docker.compose.container-number=1 --label com.docker.compose.service=wordpress -e WORDPRESS_DB_HOST=db:3306 -e WORDPRESS_DB_USER=wordpress -e WORDPRESS_DB_PASSWORD=wordpress -e WORDPRESS_DB_NAME=wordpress --add-host db:127.0.0.1 --add-host example_wordpress_db_1:127.0.0.1 --add-host wordpress:127.0.0.1 --add-host example_wordpress_wordpress_1:127.0.0.1 wordpress:latest
ae402644bb0a3c8487b6a3efcc510f4454b4faea4086f856520f6f27724c7349
0
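To confirm where that volume actually lives, you can inspect the named volume that podman-compose created; the mount point in the output should land under the user’s home directory as described above:

dan@host:~/Projects/example_wordpress$ podman volume inspect example_wordpress_db_data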

Podman pods

Running containers are viewed with ps. Note there are two, wordpress:latest and mysql:5.7, as expected from the compose file. The output below uses the --format option to output only a few of the details to make this easier to read.

dan@host:~/Projects/example_wordpress$ podman ps --format "table {{.ID}} {{.Image}} {{.Status}} {{.Ports}} {{.Names}}"
ID             Image                                Status              Ports                     Names
ae402644bb0a   docker.io/library/wordpress:latest   Up 3 minutes ago    0.0.0.0:8000->80/tcp   example_wordpress_wordpress_1
f3fa682c5a7a   docker.io/library/mysql:5.7          Up 3 minutes ago    0.0.0.0:8000->80/tcp   example_wordpress_db_1

Note that example_wordpress is included in the container names; that is the namespace the containers are running in, named by podman-compose after the directory where it was executed. Pods can be viewed with podman pod list, and more details can be viewed with the inspect command, as demonstrated with the example_wordpress pod below.

The pod list shows three running containers in the example_wordpress pod, even though only two images were defined in the docker-compose.yml file and podman ps displayed only two containers running. It also includes an INFRA ID column with the beginning of a SHA.

dan@host:~/Projects/example_wordpress$ podman pod list
POD ID         NAME                STATUS    CREATED          # OF CONTAINERS   INFRA ID
98810f0d9df2   example_wordpress   Running   3 minutes ago    3                 ad6ee2217602

Running inspect provides more info about what containers are running in that pod and their IDs.

dan@host:~/Projects/example_wordpress$ podman pod inspect example_wordpress | jq '.Containers'
[
  {
    "id": "ad6ee22176020c5cfa93e3e0bd740a5e147781e22711784ae341978ef05339a5",
    "state": "running"
  },
  {
    "id": "ae402644bb0a3c8487b6a3efcc510f4454b4faea4086f856520f6f27724c7349",
    "state": "running"
  },
  {
    "id": "f3fa682c5a7ab8dee19888acbf714c752cf7688657c9c161a20a951894491d26",
    "state": "running"
  }
]

As mentioned at the beginning of this post, pods have an infra container that sleeps to keep the pod running. It is included in the full inspect output if you don’t filter with jq as above. The infra container’s ID is given in the state information, and it matches the INFRA ID column.

dan@host:~/Projects/example_wordpress$ podman pod inspect example_wordpress | jq '.State.infraContainerID'
"ad6ee22176020c5cfa93e3e0bd740a5e147781e22711784ae341978ef05339a5"

That container can be viewed in the container list using --filter.

dan@host:~/Projects/example_wordpress$ podman container list -a --filter id=ad6ee22176020c5cfa93e3e0bd740a5e147781e22711784ae341978ef05339a5
CONTAINER ID  IMAGE                 COMMAND  CREATED         STATUS             PORTS                 NAMES
ad6ee2217602  k8s.gcr.io/pause:3.1           5 minutes ago   Up 5 minutes ago   0.0.0.0:8000->80/tcp  98810f0d9df2-infra

Cool. Remember that this container is running with the intention of keeping the pod alive even if no services are. That can be seen in action. Note for those unfamiliar: you can specify just the first few characters of the SHA to identify a container, rather than using the full SHA.

Let’s stop the containers that aren’t the infra container.

dan@host:~/Projects/example_wordpress$ podman container stop ae40264
ae402644bb0a3c8487b6a3efcc510f4454b4faea4086f856520f6f27724c7349

dan@host:~/Projects/example_wordpress$ podman container stop f3fa68
f3fa682c5a7ab8dee19888acbf714c752cf7688657c9c161a20a951894491d26

And take another look to make sure they’re exited.

dan@host:~/Projects/example_wordpress$ podman-compose ps
podman ps --filter label=io.podman.compose.project=example_wordpress
CONTAINER ID  IMAGE                               COMMAND               CREATED         STATUS                     PORTS                 NAMES
ae402644bb0a  docker.io/library/wordpress:latest  apache2-foregroun...  5 minutes ago   Exited (0) 23 seconds ago  0.0.0.0:8000->80/tcp  example_wordpress_wordpress_1
f3fa682c5a7a  docker.io/library/mysql:5.7         mysqld                5 minutes ago   Exited (0) 4 seconds ago   0.0.0.0:8000->80/tcp  example_wordpress_db_1

Now view the pod to see if it’s still running. It should be (and is) thanks to the infra container.

dan@host:~/Projects/example_wordpress$ podman pod list

POD ID         NAME                STATUS    CREATED          # OF CONTAINERS   INFRA ID
98810f0d9df2   example_wordpress   Running   5 minutes ago   3                 ad6ee2217602

And check out the containers via inspect on the pod again. Only one is running (the output is filtered with jq, but the full output would show the container IDs as well).

dan@host:~/Projects/example_wordpress$ podman pod inspect example_wordpress | jq '.Containers[].state'
"running"
"exited"
"exited"

Now kill the infra container…

dan@host:~/Projects/example_wordpress$ podman container stop ad6ee
ad6ee22176020c5cfa93e3e0bd740a5e147781e22711784ae341978ef05339a5

…and the pod is finally exited.

dan@host:~/Projects/example_wordpress$ podman pod list
POD ID         NAME                STATUS    CREATED          # OF CONTAINERS   INFRA ID
98810f0d9df2   example_wordpress   Exited    5 minutes ago    3                 ad6ee2217602

Conclusion

Fun stuff all around. With podman installed on a fresh Fedora Workstation, and not requiring elevated privileges or a daemon, it’s a great way to start digging around and using containers. Having almost exactly the same functionality and parameters as docker makes it easy to transfer skills from podman to docker or the other way around.

Mailx for cron email

Cron defaults to sending job output to the owner’s mail, or to the address set in a MAILTO variable, or directly to syslog when sendmail is not installed. If the server does not have a mail server running, or is on a network or configured specifically not to send email, or is unable to send email to a particular server or service, this can cause a problem. To get around the issue of mail not being accepted by some third parties, as described in a previous post, emails sent by cron can instead use a Simple Mail Transfer Protocol (SMTP) client to send through an external Mail Transfer Agent (MTA).

mailx

After a bit of searching, I found mailx provides a method for connecting to an external SMTP server with simple configuration. According to the man page, mailx is an intelligent mail processing system [...] intended to provide the functionality of the POSIX mailx command, and offers extensions for MIME, IMAP, POP3, SMTP and S/MIME.

Installation

Installation was completed on a CentOS 7 VPS instance. mailx is available in the base repository and can be installed with a simple yum command:

# yum install mailx

mailx configuration

Installation creates a default /etc/mail.rc file. You can then check the man page via man mailx for further configuration options. Since the plan is to use it for SMTP, searching for smtp brings up the relevant options.

I’m using Gmail, and the documentation from Google for email client configuration provided the required SMTP host:TLS-Port combination of smtp.gmail.com:587.

For the smtp-auth-password, I can’t use my own password since I’ve got 2-Step Verification enabled on my account. The server simply wouldn’t be able to send email if I had to verify it and provide a code each time. Gmail offers a way around this with App Passwords for email clients that cannot use two-factor authentication. Creating an app password takes just a couple of steps. Each server or client using an App Password should have its own unique password; a unique app password for each application provides per-application logging of its use, and makes it easy to revoke the app password if needed.

We can test our configuration as we go along with the following command:

# echo "mailx test" | mailx -s "Test Email" <EMAIL_ADDRESS>

The first round of doing that gave an error:

# smtp-server: 530 5.7.0 Must issue a STARTTLS command first. 207-v6sm21173418oie.14 - gsmtp
"/root/dead.letter" 11/308
. . . message not sent.

Easy enough of a resolution; another look at the man page or a quick grep shows that we need to include smtp-use-starttls:

# man mailx | grep -ie starttls | grep -i smtp
       smtp-use-starttls
              Causes mailx to issue a STARTTLS command to make an SMTP session SSL/TLS encrypted.  Not all servers support this command; because of common implementation defects, it cannot be automatically
              There  are  two  possible methods to get SSL/TLS encrypted SMTP sessions: First, the STARTTLS command can be used to encrypt a session after it has been initiated, but before any user-
              related data has been sent; see smtp-use-starttls above.  Second, some servers accept sessions that are encrypted from  their  beginning  on.  This  mode  is  configured  by  assigning

After updating the configuration, I found another error.

# Missing "nss-config-dir" variable.
"/root/dead.letter" 11/308
. . . message not sent.

To resolve that, I just looked for nss* in /etc/ (knowing that SSL information and certs are located there) and added it to the configuration.

# find /etc -type d -name "nss*"
/etc/pki/nssdb

Then I got yet another error:

# Error in certificate: Peer's certificate issuer is not recognized.
Continue (y/n)? SSL/TLS handshake failed: Peer's certificate issuer is not recognized.
"/root/dead.letter" 11/308
. . . message not sent.

Time for a bit more sleuthing. For whatever reason, the certificate issuer was not recognized and manual intervention was requested. After some searching around I figured it might be due to Google’s new(ish) CA, but trying to add it to the PKI trusted CAs directly didn’t help. Eventually I found a page for adding these certs directly, but in order to just get the configuration running I opted for laziness and set ssl-verify to ignore, with the intention of adding this as an ansible role at a later point.

Finally, we have the configuration below.

# cat /etc/mail.rc
set from=<YOUR_EMAIL_ADDRESS>
set smtp-use-starttls
set nss-config-dir=/etc/pki/nssdb/
set ssl-verify=ignore
set smtp-auth=login
set smtp=smtp://smtp.gmail.com:587
set smtp-auth-user=<YOUR_GMAIL_USER>
set smtp-auth-password=<YOUR_APP_PASSWORD>

Running the testing command with these configuration settings results in a new email showing up in our inbox.

cron configuration

In order for cron to use mailx, we need to do two things. First, cron will only send mail if MAILTO is set. We can add that directly into the crontab with crontab -e by adding the MAILTO variable. Afterwards, we’ll see it included in crontab -l.

And to test this, we should set up a cron job that produces output (also using crontab -e):

# crontab -l
MAILTO="<YOUR_EMAIL>"
* * * * * /usr/sbin/ip a

We also need to set crond to use mailx by editing the crond configuration to specify /usr/bin/mailx for sending mail, passing the -t flag so that mailx uses the To: header to address the email. After editing /etc/sysconfig/crond, restart crond.

# cat /etc/sysconfig/crond 
# Settings for the CRON daemon.
# CRONDARGS= :  any extra command-line startup arguments for crond
CRONDARGS=-m "/usr/bin/mailx -t"
# systemctl restart crond

Testing configuration

The crontab should now send the output of ip a to <YOUR_EMAIL> every minute. Once you’ve verified it works, be sure to remove that job to prevent flooding your inbox.

If you don’t see a new email, take a look at the system logs to see entries from the crond service in reverse order (newest entries first).

# journalctl -r --unit crond

Because of the certificate issue noted above, and because mailx ignores some of the headers cron adds before sending mail, the following output may be included in the journald logs even for a successfully sent mail.

Sep 03 19:35:01 <YOUR_HOST> crond[12378]: Error in certificate: Peer's certificate issuer is not recognized.
Sep 03 19:35:01 <YOUR_HOST> crond[12378]: Ignoring header field "X-Cron-Env: <USER=root>"
Sep 03 19:35:01 <YOUR_HOST> crond[12378]: Ignoring header field "X-Cron-Env: <LOGNAME=root>"
Sep 03 19:35:01 <YOUR_HOST> crond[12378]: Ignoring header field "X-Cron-Env: <PATH=/usr/bin:/bin>"
Sep 03 19:35:01 <YOUR_HOST> crond[12378]: Ignoring header field "X-Cron-Env: <HOME=/root>"
Sep 03 19:35:01 <YOUR_HOST> crond[12378]: Ignoring header field "X-Cron-Env: <SHELL=/bin/sh>"
Sep 03 19:35:01 <YOUR_HOST> crond[12378]: Ignoring header field "X-Cron-Env: <MAILTO=<YOUR_EMAIL>>"
Sep 03 19:35:01 <YOUR_HOST> crond[12378]: Ignoring header field "X-Cron-Env: <LANG=en_US.UTF-8>"
Sep 03 19:35:01 <YOUR_HOST> crond[12378]: Ignoring header field "X-Cron-Env: <XDG_RUNTIME_DIR=/run/user/0>"
Sep 03 19:35:01 <YOUR_HOST> crond[12378]: Ignoring header field "X-Cron-Env: <XDG_SESSION_ID=71363>"
Sep 03 19:35:01 <YOUR_HOST> crond[12378]: Ignoring header field "Precedence: bulk"
Sep 03 19:35:01 <YOUR_HOST> crond[12378]: Ignoring header field "Auto-Submitted: auto-generated"
Sep 03 19:35:01 <YOUR_HOST> crond[12378]: Ignoring header field "Content-Type: text/plain; charset=UTF-8"

And looking at the email source we should see something like the following (note, I did not include all output in the example below):

Return-Path: <YOUR_EMAIL>
Received: from <YOUR_HOST> ([<YOUR_IP_ADDRESS>])
        by smtp.gmail.com with ESMTPSA id <REDACTED>
        for <YOUR_EMAIL>
        (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
        Mon, 03 Sep 2018 12:35:01 -0700 (PDT)
Message-ID: <MESSAGE_ID>
From: "(Cron Daemon)" <YOUR_EMAIL>
X-Google-Original-From: "(Cron Daemon)" <root>
Date: Mon, 03 Sep 2018 19:35:01 +0000
To: <YOUR_EMAIL>
Subject: Cron <root@YOUR_HOST> /usr/sbin/ip a
User-Agent: Heirloom mailx 12.5 7/5/10
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
...

Now we can be sure our cron jobs mail us through an external SMTP server that will successfully deliver to our third-party service. And with mailx configured, we can easily add an email component to any scripts we might want to run.

Of note, Google’s G Suite does provide access to an SMTP relay that specifically allows sending emails to addresses either inside or outside of your domain. There are some limitations on the number of emails that can be sent based on the number of licenses in your account, but for my purposes and the imposed limits, configuring mailx was a suitable solution.

Migrating Email Providers

Self hosting email is great, until it isn’t. There are plenty of options for where to get your email, whether you’re bringing your own domain or not. Moving from a self hosted mail server to a managed email service (or between two managed services) is primarily just DNS changes. This is a quick summary of what you need to change in DNS, and why.

Ups and Downs of self hosting email

I’ve been self hosting my email for quite a while, but it was always in the back of my mind that it would be worth offloading to avoid the inevitable maintenance and possible downtime. Really, once I had email set up, there wasn’t much maintenance outside of upgrades and reboots. Sure, there was extra work for DKIM, SPF, and the occasional log review to dial in Fail2Ban, but outside of that it all just worked. The good thing about email is that even if the server is down for a bit, the sender will retry for some period of time, so I don’t consider a bit of downtime here and there an issue.

Catch-All

Because I’m the only user on my domain, I’ve gotten into the habit of handing out an email address relevant to the company/business/use, so that I can find out who gave out or lost my email address, and set up filtering more easily. For example, if I’m handing out my email address to a salesman at a car dealer in Omaha, I would give them the address omaha-cardealer@my-domain.com. This tends to get a confused response (many times I’ve been asked “Do you work for us?”), but a quick blurb of “It’s for mail forwarding” is enough for people to lose interest. All it takes to do this is to enable a catch-all email address (see your host/server configuration) and then handle mail forwarding based on the To: field.

Logs

Having access to your own mail logs is pretty handy. Let’s face it, if you’re self hosting anything, you’re going to be digging into logs at some point, and if you didn’t enjoy that at least a bit you wouldn’t be self hosting. How great is it to dial in your Fail2Ban, or manually drop the IP that won’t stop knocking on your server?

Someone didn’t get an email? A quick search and you can show whether their mail server received it or not. But that doesn’t mean much to a lot of people, and unless you’re emailing another self-hoster, providing the log information or even just a general “your server got it” doesn’t carry a whole lot of weight. Telling your recipient that their mail server received it is effectively the same as it not being sent at all, other than maybe a push to check spam folders.
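As a rough sketch, on a typical Postfix box that check is usually just a grep (the log path and address here are placeholders; your distribution may log to a different file):

$ grep "to=<someone@example.com>" /var/log/maillog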

Blocks from Providers

This is the worst, and what finally got me to stop self hosting. Running a mail server on a VPS runs into a whole host of problems, even if your configuration (DNS, DKIM, SPF, and DMARC) is set up perfectly. When a VPS is created, there’s no way to know what its IP was used for before it was allocated to you. It may have been used and flagged for spam before you got it. Even if not that IP, another IP (or several) in your new network block may have been flagged, resulting in a provider not accepting email from any IP address within that block. And even if everything is fine when the server is created, this can change at any time.

Diagnostic-Code: smtp; 550 5.7.1 Unfortunately, messages from [YOUR.IP.ADD.RESS]
weren't sent. Please contact your Internet service provider since part of
their network is on our block list (<REMOVED>). You can also refer your
provider to http://mail.live.com/mail/troubleshooting.aspx#errors.
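One way to check whether an address has ended up on a public blocklist is to query a DNSBL with the IP’s octets reversed (Spamhaus is shown purely as an example; each list has its own lookup rules). An answer in 127.0.0.0/8 means the address is listed; an empty answer means it is not.

$ dig +short RESS.ADD.IP.YOUR.zen.spamhaus.org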

Typically your VPS provider will go to bat for you and work with the mail administrator where you’re getting blocked. But again, this can change at any time. Some mail providers won’t even respond with this information, and instead just silently mark your email as spam.

Where to get email service

A quick search at your favorite internet search engine provides plenty of options for hosted email. These can include anything from a shared hosting platform, a VPS provider with a managed email service, or one of the “big” players such as Outlook or Gmail. Each has advantages and disadvantages. Going with a shared or VPS host gets rid of the management aspect, but can still run into issues with blocks from other providers. Using Outlook or Gmail may cost a little more, but comes with the bonus of being a huge email provider and additional business tools.

There’s way more to the decision than what I’ve given above if you have specific business needs, multiple users, etc. Definitely dig a little deeper if you’re looking for more than just someone else to host your email.

How to migrate

This is quite high level as there are too many mail and DNS providers to attempt to cover anything more than the basic steps. I use Rackspace DNS, and was previously hosting my email on a Linode server. The steps are fairly simple for the migration itself.

  1. Sign up with your new email host
  2. Update DNS to the new host
  3. Enable SPF
  4. Enable DKIM
  5. Enable DMARC

Sign up for your new email host

I went with Google because of the ability to add catch-all email forwarding, and because the integration with Android was worth more to me than access to Office features with Outlook. Prices were comparable between the two for my use.

Update DNS to point to the new mail host

This will be provided by your new email host. DNS for your domain uses MX records to direct mail. You may have one MX record (most likely if you’re self hosting), but multiple records are possible and common, with specified priorities. Your new provider should tell you exactly what needs to be entered.

TTL is a very important concept with DNS here. Time To Live basically says “this record is good for X amount of time.” If your record’s TTL is 24 hours, it could be up to 24 hours before your new MX records are used and your mail is sent to the new provider. I always keep my TTLs at 1 hour or less (typically 5 minutes), but be sure not to tear down your existing mail setup until at least the TTL of the previous DNS record has passed.

$ dig dankolb.net MX +short
1 ASPMX.L.GOOGLE.COM.
5 ALT2.ASPMX.L.GOOGLE.COM.
10 ALT3.ASPMX.L.GOOGLE.COM.
5 ALT1.ASPMX.L.GOOGLE.COM.
10 ALT4.ASPMX.L.GOOGLE.COM.
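The +short output hides the TTL; dropping to +noall +answer shows the remaining TTL in seconds as the second field of each answer line, which is handy for timing the cutover:

$ dig dankolb.net MX +noall +answer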

More DNS – SPF

Sender Policy Framework, or SPF, is a DNS record that essentially states where email for a domain can be sent from. This is a method of preventing spam by allowing the receiving mail server to validate that email received for a domain came from a mail server authorized to send it. There are plenty of options for SPF; take a look at the documentation for more information. Your new host should provide this as well, in order to mark their sending servers as valid for your domain.

$ dig dankolb.net TXT +short | grep spf
"v=spf1 include:_spf.google.com ~all"

This record states to use _spf.google.com to find the list of approved senders, and to SoftFail anything else (accept the email but mark it). The SoftFail allows email to be sent from other places (such as directly from a WordPress installation) without being flat out rejected.

Even More DNS – DKIM

DomainKeys Identified Mail, or DKIM, is a method for identifying email messages by signing and verifying them against a public key published in the domain’s DNS. The sending server signs the email with a header field that provides a hash of the message and the DNS record to query for the public key, along with additional information. This provides verification that the email was generated by an authorized server (or at least one that had the private key!). Again, your host should provide this, but it may require additional steps such as generating the key.

$ dig google._domainkey.dankolb.net TXT +short 
"v=DKIM1; k=rsa; p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAgWG5dWv8XqN9UUqDsoi3F5wW8SwCahdslYbtygLHZageCccyNKM5ux7IhDG1sHKVM4ASG+jV6NvaMlxxIWMAAEQ3gQjZSVzsGzPXAdoaVJL73x+VfxuAmhpz8NPp4GLZMzGuMAH/Aq1w0IsCPzPGwd0jmZ1A8pOGPBDnlpYKAklTm+Rb/iv+8xUMy3O/jLLZj" "xK9/0Zo0+K28dB2QgozgIIABXFDSoYNUkg9yH4ag1cZmhSkaQpJ17TwLTqymHO6sw4pkm7EcIRYhPtjdmwunPEm53n6ObuT/fRK3UFNqjpRp2vb6VPdHmK8MjFZVOumsy+FMjaZaJhytoSICkNlfQIDAQAB"

A DKIM-signed email actually specifies the DNS record to use, in this example google._domainkey.dankolb.net, so there could be multiple records. Because this is just a header value, it is compatible with mail servers that do not do DKIM validation.
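For illustration, the signature header on a received message names the domain and selector in its d= and s= tags, which is how a receiver knows to query google._domainkey.dankolb.net for the key (the hash and signature values below are elided placeholders):

DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=dankolb.net;
        s=google; h=from:to:subject:date; bh=<BODY_HASH>; b=<SIGNATURE>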

Interesting side note: DKIM initially failed to enable after the new GSuite account creation, giving an Error #1000 pop-up message after a 500 from the backend service. After reviewing with support, I received further information and an eventual follow-up email. Enabling DKIM can take some time for new GSuite accounts, and after two days I was able to enable it and continue. As mentioned above, for my personal email this wasn’t an issue, but when migrating multiple users this could be a problem. It can easily be worked around (and would probably be more realistic) by setting up the new account, giving it some time, then modifying DNS.

And Even More DNS – DMARC

Domain-based Message Authentication, Reporting, and Conformance (DMARC) tells receiving mail servers what action to take based on SPF and DKIM results. Essentially, this allows the mail administrator to definitively state what to do with email that doesn’t pass SPF/DKIM, and where to send aggregated mail reports.

$ dig _dmarc.dankolb.net TXT +short 
"v=DMARC1; p=none; rua=mailto:postmaster@dankolb.net"

This is not a terribly exciting example of DMARC, as p=none specifies no particular action for the receiving mail server. But, to keep with the SPF record allowing email to be sent from elsewhere, this is needed to prevent rejection of that email (alternatively, it could use quarantine).
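For comparison, a hypothetical stricter record that asks receivers to quarantine failing mail would look something like this (same reporting address, only the policy changes):

"v=DMARC1; p=quarantine; rua=mailto:postmaster@dankolb.net"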

Additional (Optional) Steps

There are plenty of other things to do depending on your email and how you use it, but the above outlines the initial setup. Further steps to consider, not described here, include:

  1. Reconfiguring email clients (mobile, desktop)
  2. Migrating existing emails from the old host to the new host
  3. Updating email filtering and forwarding