
 - - - By CrazyStat - - -

2. February 2020

MySQL root password in Ubuntu cannot be changed

Filed under: DBMS, Linux, Server Administration — Christopher Kramer @ 16:08

You tried a dozen different howtos to change the root password of MySQL (5.7+) in a recent Ubuntu version (18.04+) and did not succeed? Like the one I posted in 2013?

The reason these guides do not work is simple: on these systems, the MySQL root user authenticates via the auth_socket plugin instead of a password, so changing the password has no effect. You can find more details here.
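You can check this yourself by looking at the authentication plugin of the root account; on an affected system the plugin column shows auth_socket (or unix_socket on MariaDB) instead of mysql_native_password:

sudo mysql -e "SELECT user, host, plugin FROM mysql.user WHERE user = 'root';"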

So how the heck can we access the database? On the console, you can login to MySQL as root without a password like this:

sudo mysql

If you need root access from another application, like phpMyAdmin, the best way is to add a new user like myroot that has all privileges and a password. To do so, login as root in the console (sudo mysql) and then create the new user like this (adjust new_password to something more secure):

CREATE USER 'myroot'@'localhost' IDENTIFIED BY 'new_password';
GRANT ALL PRIVILEGES ON *.* TO 'myroot'@'localhost' WITH GRANT OPTION;
CREATE USER 'myroot'@'%' IDENTIFIED BY 'new_password';
GRANT ALL PRIVILEGES ON *.* TO 'myroot'@'%' WITH GRANT OPTION;
FLUSH PRIVILEGES;
QUIT;

Now you can log in from any application using the username myroot and the password new_password. On the console:

mysql -u myroot -p
Enter password: new_password

Hope this helps somebody who is in the same situation as me.

If this made your day, please drop a comment.

Recommendation

Try my Open Source PHP visitor analytics script CrazyStat.

4. December 2019

letsencrypt certbot: how to remove some domains from an existing certificate

Filed under: Linux, Server Administration — Christopher Kramer @ 21:44

You have a certificate with a couple of alternative domain names added to it, but want to get rid of some of them while keeping others? The parameter --renew-with-new-domains is what you want. This is how you use it:

certbot certonly --cert-name "maindomain.com" --renew-with-new-domains -d "maindomain.com,alternative1.com,alternative2.com"

This will renew the certificate maindomain.com with the domains maindomain.com, alternative1.com and alternative2.com. Any other domains that were included in the certificate before will not be part of the new certificate. If you execute it like this, it will ask you for the verification method and parameters interactively. Alternatively, you can also configure this with parameters like -a webroot --webroot-path "/var/www/".
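If you are not sure which domains are currently included in the certificate, you can list them first (the cert name is just the example from above):

certbot certificates --cert-name "maindomain.com"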

8. November 2019

rspamd and redis: Random Connection refused errors

Filed under: Server Administration — Christopher Kramer @ 19:35

For some time now, I have been using rspamd as a spam filter. By the way, I can really recommend it over SpamAssassin / amavisd-new setups. I configured rspamd to use a redis database for the Bayes filter, neural network etc.

Everything worked well until at some point, probably due to an update, it started spitting out Connection refused errors like these:

2019-11-07 06:25:10 #32043(normal) rspamd_redis_stat_keys: cannot get keys to gather stat: Connection refused
2019-11-07 06:25:15 #32042(normal) <4n3xhp>; lua; neural.lua:855: cannot get ANN data from key: rn_default_default_t66yaxz4_0; Connection refused
2019-11-07 06:25:15 #32044(normal) <4n3xhp>; lua; neural.lua:1101: cannot get ANNs list from redis: Connection refused
2019-11-07 06:25:15 #32040(controller) <4n3xhp>; lua; neural.lua:1135: cannot exec invalidate script in redis: Connection refused

Not every connection to redis failed, but quite a lot of them. This is a Debian Stretch system with a local redis 3.2.6 database and rspamd 2.1 running on the same host, so network problems could be ruled out. The redis logs did not even mention the failed connections, and the MONITOR command of redis also did not show anything for the refused connections. Restarting redis didn't change anything, but restarting rspamd solved the problem for about an hour, after which it started again. I tried adjusting the Linux open files limit, the maximum number of redis connections and the timeout settings, without any luck.

Finally, I found the setting that solved the problem. My redis configuration in /etc/rspamd/local.d/redis.conf had looked like this:

password = "somePassword";
write_servers = "localhost";
read_servers = "localhost";

Changing it to this solved the problem:

password = "somePassword";
write_servers = "127.0.0.1:6379";
read_servers = "127.0.0.1:6379";

The problem seems to be that localhost resolves to both an IPv4 address and an IPv6 address. Redis, however, is configured to listen only on the IPv4 address in /etc/redis/redis.conf:

bind 127.0.0.1

Rspamd seems to sometimes use the IPv4 address, which works, and sometimes the IPv6 address, which doesn’t. This is why the problem seems to occur randomly.
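If you want to verify this on your own system, you can compare what localhost resolves to with the address redis actually listens on (a quick check, assuming a typical Debian setup with iproute2 installed):

getent ahosts localhost
ss -ltn | grep 6379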

Hope this helps somebody to find the issue faster. If this saved your day, please drop a comment.

29. August 2018

phpBB: Upgrade fails (timeout) – solution and how to update on the CLI

Filed under: PHP, Server Administration — Christopher Kramer @ 10:46

If you are getting a timeout when updating phpBB to 3.2.2, the updater tells you to either increase the maximum execution time or do the update on the CLI.

How to update on the CLI

Of course, this requires that you have access to the CLI.

  • On the CLI, go into the install folder of phpBB
  • Create a file named config.yml that contains this:
    updater:
        type: db_only
  • Make sure the above file does not end with newlines
  • Run:
    php ./phpbbcli.php update config.yml

Fixing the update timeout problem

In my case, the CLI update gave this error:

PHP Fatal error:  Call to a member function fetch_array() on resource in [...]/install/update/new/phpbb/db/migration/data/v32x/fix_user_styles.php on line 42

This is a bug in the migration script. The easiest way to fix it is to go into your config.php and change the $dbms from mysql to mysqli. This is recommended anyway.
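Depending on your phpBB version, the value of $dbms is either a short driver name or a fully qualified class name, so the change would look roughly like one of these (a sketch; check the existing line in your own config.php):

// phpBB 3.1/3.2 style class name
$dbms = 'phpbb\\db\\driver\\mysqli';
// older short style
// $dbms = 'mysqli';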

If this is not possible for you, open the mentioned file and search for:

$enabled_styles = $result->fetch_array();

And replace this with:

$enabled_styles = $this->db->sql_fetchrowset($result);

Thanks to RMcGirr83 and Marc on this thread.

9. August 2018

Debian Unattended Upgrades: Upgrade Third Party mono packages (sury.org, icinga)

When using the official third-party repository of Mono on Debian, unattended-upgrades will not upgrade the Mono packages unless you allow their origin. To do so, edit your /etc/apt/apt.conf.d/50unattended-upgrades, which may look like this:

Unattended-Upgrade::Origins-Pattern {
      "o=Debian,n=${distro_codename}";
      "o=Debian,n=${distro_codename}-updates";
      "o=Debian,n=${distro_codename}-proposed-updates";
      "o=Debian,n=${distro_codename},l=Debian-Security";
};

To also update mono packages on Debian Stretch, add XamarinStretch as Origin:

Unattended-Upgrade::Origins-Pattern {
      "o=Debian,n=${distro_codename}";
      "o=Debian,n=${distro_codename}-updates";
      "o=Debian,n=${distro_codename}-proposed-updates";
      "o=Debian,n=${distro_codename},l=Debian-Security";
      "o=XamarinStretch";
};

If you use another Debian version or distribution, like Jessie or Ubuntu, search for the file /var/lib/apt/lists/download.mono-project.com_repo_*_InRelease, check its "Origin:" line and adjust the origin in the config file accordingly.
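A quick way to read the origin straight from the downloaded release files is a grep like this (the path pattern is the one mentioned above; adjust it for other repositories):

grep -m1 '^Origin:' /var/lib/apt/lists/download.mono-project.com_repo_*_InRelease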

Repositories with empty Origin like packages.sury.org

If you find that the Origin of the packages in the repository is not given, you can also tell unattended-upgrades to select them based on the site instead. For example, this works for the packages from sury.org:

"site=packages.sury.org";

Icinga repository

The packages.icinga.com repository has the Origin “debian icinga-stretch”, so this is what you need to add to /etc/apt/apt.conf.d/50unattended-upgrades:

"o=debian icinga-${distro_codename}";

Test your changes

To test if it works, you can run unattended-upgrades manually like this (as root or with sudo):

unattended-upgrade -d

If this made your day or you still have problems, just drop a comment.

3. August 2018

iptables: Accept IP address of current ssh client

Filed under: Linux, Security, Server Administration — Christopher Kramer @ 20:39

You have some service, e.g. Webmin, that should not be accessible to the public, so you block access to it with iptables? And sometimes you connect from a client that is not yet whitelisted in iptables, so you always need to look up its IP and add an iptables rule over SSH. Here is a small shell one-liner that makes your life easier:

iptables -I INPUT -p tcp -s `echo $SSH_CLIENT | awk '{ print $1}'` --dport 10000 -j ACCEPT

This just adds an accept rule to iptables that accepts requests from the IP address of the current SSH client to port 10000. Of course, you need to adjust the port. You can paste this into a bash script, define a bash alias for it or whatever else gives you fast access to it.

To remove the iptables rule, just replace -I with -D:

iptables -D INPUT -p tcp -s `echo $SSH_CLIENT | awk '{ print $1}'` --dport 10000 -j ACCEPT

You can create one script or shell alias for each of the two commands for easy access, for example as sketched below.
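A pair of aliases in your ~/.bashrc could look roughly like this (the alias names and the port are just placeholders; the bash expansion ${SSH_CLIENT%% *} is used instead of the echo/awk pipeline to avoid quoting problems inside the alias):

alias allow-webmin='iptables -I INPUT -p tcp -s ${SSH_CLIENT%% *} --dport 10000 -j ACCEPT'
alias deny-webmin='iptables -D INPUT -p tcp -s ${SSH_CLIENT%% *} --dport 10000 -j ACCEPT'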

If this made your day, just leave a comment.

17. July 2018

Debian Stretch: Adjust the CPU priority (nice level) of systemd daemons like spamassassin, amavisd-new, clamd etc.

The good old days: init.d

When still using Debian Wheezy, I configured my spamassassin service to run at a nice level of 10 so it does not slow down apache, mysql, php etc. To do so, I noticed in /etc/init.d/spamassassin a parameter called NICE with a comment saying I should not touch it and instead go into /etc/default/spamassassin. So I adjusted this file, and on my Debian Stretch, it still looks like this:


# /etc/default/spamassassin
# Duncan Findlay

# WARNING: please read README.spamd before using.
# There may be security risks.

# Change to one to enable spamd
ENABLED=1

# Options
# See man spamd for possible options. The -d option is automatically added.

# SpamAssassin uses a preforking model, so be careful! You need to
# make sure --max-children is not set to anything higher than 5,
# unless you know what you're doing.

OPTIONS="--create-prefs --max-children 5 --helper-home-dir"

# Pid file
# Where should spamd write its PID to file? If you use the -u or
# --username option above, this needs to be writable by that user.
# Otherwise, the init script will not be able to shut spamd down.
PIDFILE="/var/run/spamd.pid"

# Set nice level of spamd
NICE="--nicelevel 10"

# Cronjob
# Set to anything but 0 to enable the cron job to automatically update
# spamassassin's rules on a nightly basis
CRON=1


This had worked well on Debian Wheezy. Now I noticed that on Debian Stretch, the spamassassin service is running with a nice level of 0, so basically this setting has no effect anymore. This is because Debian now uses systemd, and these init.d files are magically transformed into systemd units, where the nice level cannot be adjusted like this. The complete story is a lot longer and more complex, but I don't want to bother you with it here.

How to adjust the nice level of services on Debian Stretch

First, check the location of the unit file of your service. To do so, run:

systemctl status spamassassin.service

It will output something like:

● spamassassin.service - Perl-based spam filter using text analysis
   Loaded: loaded (/etc/systemd/system/spamassassin.service; disabled; vendor preset: enabled)
   Active: active (running) since Tue 2018-07-17 16:07:26 CEST; 19min ago
 Main PID: 946 (/usr/sbin/spamd)
    Tasks: 3 (limit: 4915)
   CGroup: /system.slice/spamassassin.service
           ├─946 /usr/sbin/spamd -d --pidfile=/var/run/spamd.pid --create-prefs --max-children 5 --helper-home-dir
           ├─949 spamd child
           └─950 spamd child

Jul 17 16:07:23 systemd[1]: Starting Perl-based spam filter using text analysis...
Jul 17 16:07:26 systemd[1]: Started Perl-based spam filter using text analysis.


In the line “Loaded”, it gives the location of the unit file, in this case:

/etc/systemd/system/spamassassin.service

In your case, it is probably:

/lib/systemd/system/spamassassin.service

If it gives a path in /etc/init.d/ and says "generated" under "Loaded", as it does e.g. for amavisd-new, then you will probably find the unit file automatically generated by systemd-sysv-generator in one of these paths:

/run/systemd/generator.late
/run/systemd/generator.early
/run/systemd/generator
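A quick way to see which unit file systemd actually uses, including generated ones, is systemctl cat, which prints the path of the file in a comment on the first line (the service name here is just an example):

systemctl cat amavis.service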

If your unit file is not already in /etc, copy the file from where it is now to /etc/systemd/system/, e.g. like this:

cp /lib/systemd/system/spamassassin.service /etc/systemd/system/

If your file is already in /etc, then just go ahead and edit it there.

Now open the file /etc/systemd/system/spamassassin.service – it will look like this:

[Unit]
Description=Perl-based spam filter using text analysis
After=syslog.target network.target

[Service]
Type=forking
PIDFile=/var/run/spamd.pid
EnvironmentFile=-/etc/default/spamassassin
ExecStart=/usr/sbin/spamd -d --pidfile=/var/run/spamd.pid $OPTIONS
ExecReload=/bin/kill -HUP $MAINPID
StandardOutput=null
StandardError=null
Restart=always

[Install]
WantedBy=multi-user.target

Now adjust it so after [Service] and before [Install], it sets the nice level like this:

[Unit]
Description=Perl-based spam filter using text analysis
After=syslog.target network.target

[Service]
Type=forking
PIDFile=/var/run/spamd.pid
EnvironmentFile=-/etc/default/spamassassin
ExecStart=/usr/sbin/spamd -d --pidfile=/var/run/spamd.pid $OPTIONS
ExecReload=/bin/kill -HUP $MAINPID
StandardOutput=null
StandardError=null
Restart=always
Nice=10

[Install]
WantedBy=multi-user.target


Now reload the systemd-daemon:

systemctl daemon-reload

And restart your service:

systemctl restart spamassassin.service

Now check the nice level of your spamd processes; it should be 10 now.
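You can check it for example like this; the NI column should now show 10 for the spamd processes:

ps -eo pid,ni,cmd | grep '[s]pamd'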


If this made your day or you still have problems, please let me know in the comments.

9. June 2017

Automatically run WP-CLI as the correct user

Filed under: Linux, Server Administration, Wordpress — Christopher Kramer @ 19:04

You are root, and your PHP runs with a different user for each site / customer, e.g. using PHP-FPM, and these users don't have a shell assigned. So how can you easily run WP-CLI as the correct user to avoid permission-denied problems?

This is how: define an alias for wp in your shell config, e.g. ~/.bash_profile, like this:

alias wp='sudo -u `stat -c '%U' .` -s -- php /usr/local/wp-cli.phar --path=`pwd` '

This will first run stat -c %U . to get the owner of the current folder. Then it will pass this to sudo to execute the command as this user. You might need to adjust the path to the phar-file.

Relogin so the alias takes effect and have fun! 🙂

Now when you are in the folder of some site and run WP-CLI with the wp command, it will always automatically run as the user that owns the WordPress folder, which should be the user assigned to this site.
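For example (the path is made up; cd into whichever directory the site lives in):

cd /var/www/example.com/htdocs
wp core version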

This is tested on a Debian Wheezy system. On FreeBSD, you would need to adjust it slightly:

alias wp='sudo -u `stat -f '%Su' .` -s -- php /usr/local/wp-cli.phar --path=`pwd` '

Let me know if this saved your day or you still have some problems.

26. March 2017

Zimbra: fix corrupt index open_conversation in an mboxgroup MySQL-DB

Filed under: DBMS, Linux, Server Administration — Christopher Kramer @ 20:50

I just wanted to upgrade a Zimbra server from 8.7.3 to 8.7.5. The upgrade always asks you whether to check database integrity. Even though it was only a minor upgrade, I chose yes to be on the safe side. And it turned out the MySQL DB was indeed corrupt.

I had seen corrupt Zimbra DBs a lot, and the “MySQL crash recovery” guide in the Zimbra wiki had always helped out. But not this time.

I tried the crash recovery as explained in the wiki. When creating the dumps, four mboxgroup databases always failed because the index open_conversation of the table open_conversation was corrupt. As the guide explains, I increased innodb_force_recovery step by step from 1 to the maximum of 6, but the error did not go away.

So here is what helped:

  1. Try to create the dumps as explained in the crash recovery guide. You will get errors like this:
    Dumped mboxgroup8
    mysqldump: Error 1712: Index open_conversation is corrupted when dumping table `open_conversation` at row: 0
    Dumped mboxgroup9

    This means that mboxgroup9 (not 8!) is corrupt. Write down all the mboxgroup numbers where an error appeared.

  2. Remove  innodb_force_recovery from the my.cnf if you inserted it
  3. Login as zimbra
    su zimbra
  4. Restart the mysql server
    mysql.server restart
  5. Load the MySQL account data into shell variables
    source ~/bin/zmshutil ; zmsetvars
  6. Log into MySQL using the root account
    mysql -u root --password=$mysql_root_password
  7. Open the first database that is corrupt:
    USE mboxgroup9;
  8. Repair the corrupt open_conversation table:
    OPTIMIZE TABLE open_conversation;

    Note: If this fails, check if you really removed innodb_force_recovery from the my.cnf!

  9. Go back to step 7 and open the next database that is corrupt until all have been repaired.
    Now exit the MySQL prompt:

    exit;
  10. You can now continue with the crash recovery; it should now create all dumps correctly. But if the open_conversation tables were the only corruption problem, you could also just stop here, as this should have fixed the corruption. In my case, I just started the upgrade again and let it verify message store database integrity again, and this time it completed with “No errors found”. 🙂
  11. Clean up the MySQL dumps
     rm -R /tmp/mysql.db.list /tmp/mysql.sql/

Please let me know if this helped you or if you have some additions.

26. February 2017

Postgrey: let nagios/icinga tell you what domains might need whitelisting

Filed under: Linux, Server Administration — Christopher Kramer @ 16:58

The problem: greylisting significantly delays mail from big mail providers with many servers

I still think greylisting with postgrey is very effective against spam. But every now and then, mail from some senders gets delayed for hours. This is especially the case for big mail service providers like Amazon SES, Microsoft Exchange Online Protection or Mailchimp. These providers have lots of servers and IP addresses, and the first attempt to deliver an email usually comes from a different server than the second. Thus, postgrey will not find a matching triplet and will block the second attempt again. This continues until one of the mail servers has tried twice and postgrey finds a matching triplet. That might take hours and thus cause significant delays.

Therefore, if you want to use postgrey without delaying lots of mail for several hours, you need to whitelist these big email services with many servers. As new services come up and others go, you need some way to find out which servers try to send mail to your server and frequently get denied because no triplet is found.

The solution: Check your mail logs

To get an idea of what is happening and which servers might need whitelisting, I wrote a small command that analyzes your mail.log-files. Here it is:

grep -ohP 'reason=new, client_name=[^,.]+\.[^,.]+\.\K[^,.]+\.[^,]+' /var/log/mail.log* | sort | uniq -c | sort -nr | less

So what does this do?

It scans your /var/log/mail.log* files for occurrences containing reason=new, which is what postgrey logs when no triplet was found. It then takes the domain of the server that is trying to send the mail to you, stripping the first subdomain, as this might vary between the servers that the mail service provider uses. It then sorts, counts and orders the result.

The result of this command might look like this (stripped after some lines):

    579 protection.outlook.com
    503 eu-west-1.amazonses.com
    113 facebook.com
     54 mcsv.net
     52 rsgsv.net
     48 mcdlv.net
     28 amazonses.com
     17 asianet.co.th

You should especially check all the lines with a lot of occurrences. They might be domains of big email service providers that need whitelisting for the reasons explained above. For example, the mail from protection.outlook.com and amazonses.com is mail sent by customers of these mail providers and usually is not spam. In particular, the mail servers of these providers do retry delivery lots of times, so greylisting would not block spam anyway, it would just cause significant delays. So these domains should be whitelisted.

But the domains in this list might also only be domains that are used by an ISP for dynamic IP addresses that are actually sending spam. For example, asianet.co.th seems to be a big ISP from Thailand and multiple dynamic IP-addresses from this ISP tried to send spam, resulting in 17 occurrences where no triplet was found. This is mail that you want to be blocked/delayed.

Once you decided which domains you want to whitelist, you can do so by adding them in your whitelist-file (on Debian this is usually /etc/postgrey/whitelist_clients).

The entries should look like this:

/.*\.protection\.outlook\.com$/
/.*\.amazonses\.com$/
/.*\.facebook\.com$/
/.*\.booking\.com$/
/.*\.mcsv\.net$/
/.*\.rsgsv\.net$/
/.*\.mcdlv\.net$/
/.*\.mandrillapp\.com$/

Note: You should escape all dots with a backslash, as done in the example. Otherwise, a dot matches any character, so protection.outlook.com would also whitelist protectionAoutlook.com, which might be a domain that spammers register on purpose to get through servers that use inaccurate whitelists.

Automation with nagios/icinga

You could run the above command from time to time to check if new services need whitelisting. But it would be easier to get notified when a new domain that you have not checked yet pops up in this list, right?

Therefore, I wrote a small nagios/icinga plugin that checks if any new domains appear on this list. It is a bit quick and dirty but does its job.

The main part is a bash script that you could place in /usr/lib/nagios/plugins/check_postgrey_whitelist:

#!/bin/bash
log='/var/log/mail.log.1'
ignoreFile='/etc/nagios-plugins/postgrey_no_whitelist'

hosts=`grep -ohP 'reason=(new|early-retry[^,]*), client_name=[^,.]+\.[^,.]+\.\K[^,.]+(\.co|\.com)\.?[^,]+' $log | sort | uniq -c | awk '$1>=15{print $2}' | sort`;
noWhitelist=`cat $ignoreFile | sort`;

diff=`comm -23 <(echo "$hosts") <(echo "$noWhitelist") | sed ':a;N;$!ba;s/\n/ /g'`;

OK=0;
WARNING=1;
CRITICAL=2;
UNKNOWN=3;

if [ -z "$diff" ]; then
        echo "Okay, the top domains are whitelisted or ignored";
        exit $OK;
else
        echo "Whitelist or ignore these senders: $diff";
        exit $WARNING;
fi

This script ignores all domains listed in /etc/nagios-plugins/postgrey_no_whitelist (one per line), assuming you have checked them and decided not to whitelist them. So create this file as well and put in the domains that you have checked and do not want to whitelist:

asianet.co.th
example.com
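Before wiring it into nagios/icinga, you can make the script executable and run it once by hand to verify the output and the exit code (0 means OK, 1 means warning):

chmod +x /usr/lib/nagios/plugins/check_postgrey_whitelist
/usr/lib/nagios/plugins/check_postgrey_whitelist
echo $?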

Now we need to define the command for this plugin:

Create the configuration file where your other plugins are defined, e.g. /etc/nagios-plugins/config/postgrey_whitelist.cfg:

define command {
        command_name    check_postgrey_whitelist
        command_line    /usr/lib/nagios/plugins/check_postgrey_whitelist
}

Now you need to create a service for your localhost that runs this command.

In your localhost service definition, e.g. /etc/icinga/objects/localhost_icinga.cfg, add this:

define service{
        use                             generic-service
        host_name                       localhost
        service_description             Postgrey Whitelist
        check_command                   check_postgrey_whitelist
}

Finally, make sure that nagios/icinga has read access to your mail logfile. You can adjust which file to check in the script above; I chose mail.log.1. So you might chown this file to the nagios user like this:

chown nagios:adm mail.log.1

Of course log rotation needs to know that newly rotated files should get this owner. On a Debian system, you can configure logrotate to do so in /etc/logrotate.d/rsyslog. Adjust it similarly to this:

/var/log/mail.info
/var/log/mail.warn
/var/log/mail.err
/var/log/daemon.log
/var/log/kern.log
/var/log/auth.log
/var/log/user.log
/var/log/lpr.log
/var/log/cron.log
/var/log/debug
/var/log/messages
{
        rotate 4
        weekly
        missingok
        notifempty
        compress
        delaycompress
        sharedscripts
        postrotate
                invoke-rc.d rsyslog rotate > /dev/null
        endscript
}

/var/log/mail.log
{
        rotate 4
        weekly
        missingok
        notifempty
        compress
        delaycompress
        sharedscripts
        create 440 nagios adm
        postrotate
                invoke-rc.d rsyslog rotate > /dev/null
        endscript
}

So I removed mail.log from the first logrotate definition and created a new one below, which is exactly the same but contains an extra line create 440 nagios adm to tell logrotate which owner and permissions to assign to the logfile.

You might need to adjust some stuff depending on your distro, but I hope this helps somebody to set this up.

Update 06.03.2017: Adjusted script so it does not consider second-level domains like co.uk or com.br as single senders anymore.
