
Integrating with crad-log-watcher to block IPs due to login failures for your custom web/server app


To integrate your application's login failures with crad-log-watcher, so that the remote IP is blocked automatically once a given number of login failures is reached, follow this guide.

The following steps will help you create a login failure handler that tracks and manages login failures for any given application, blocking the source IP when a configurable threshold is reached.

  1. First, make your web or server application generate a log entry indicating the IP that should be blocked.

    We recommend sending this log to syslog because it is accessible to all system users and requires no special privileges; that will simplify the next steps. If you decide to send this information to another log, just adapt everything as needed.

    With PHP, generating such a log entry looks like this:

    // in case of login failure:
    $remoteAddr     = $_SERVER['REMOTE_ADDR'];
    $currentWeb     = $_SERVER['SERVER_NAME'];
    // adapt: point this variable to the login that was attempted and failed
    $loginAttempted = isset ($_POST['login']) ? $_POST['login'] : 'unknown';
    syslog (LOG_INFO, "Login failure from [$remoteAddr] access denied to [$currentWeb] with user [$loginAttempted]");

    This will record an entry in syslog every time a login failure happens.
    This change is required for every web page or server you want to protect.
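
    If your application is written in Python rather than PHP, an equivalent sketch would be the following (the variable values are hypothetical; take them from your own request handling):

    import syslog

    # hypothetical values: obtain these from your request/session handling
    remote_addr     = "88.99.109.209"
    current_web     = "foobar.com"
    login_attempted = "userAccess1"
    syslog.syslog (syslog.LOG_INFO, "Login failure from [%s] access denied to [%s] with user [%s]" % (remote_addr, current_web, login_attempted))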

  2. Now, create a custom handler for crad-log-watcher that reads these logs, keeps track of login failures and blocks any IP that reaches the threshold.

    For that, create a file called login_failure_handler.py with the following content (adapt as needed):

    #!/usr/bin/python

    from core_admin_common import support
    from core_admin_agent  import checker, watcher

    # database used to track login failures
    database_path = "/etc/core-admin/client/my.watcher.sql"

    def init ():

        # flag this as a child process for checker notifications
        checker.is_child = True

        # create the tables used to track and block login failures
        (status, info) = watcher.create_track_login_failure_tables (database_path)
        if not status:
            return (False, "Unable to create ip_registry table, error was: %s" % info)

        return (True, None) # init ok

    def handle_line (line, source_log):

        # only process login failure lines
        lowered = line.lower ()
        if "login failure from [" in lowered and "access denied to [" in lowered:

            # assuming the log format produced in step 1:
            # Jul  7 16:15:29 node04-grupodw python: Login failure from [88.99.109.209] access denied to [http://foobar.com] with user [userAccess1]

            source_ip_to_block    = lowered.split ("login failure from [")[1].split ("]")[0]
            login_failure_because = lowered.split ("access denied to [")[1].split ("]")[0]
            user_login            = lowered.split ("with user [")[1].split ("]")[0]

            # configure the notification reason and the block threshold
            reason                   = "Access failure to %s" % login_failure_because
            fail_logins_before_block = 10

            # track this (user, ip) pair and block the ip once the threshold is reached
            watcher.track_and_report_login_failure (user_login, source_ip_to_block, reason, database_path, fail_logins_before_block, source_log)

        return
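
    To quickly verify the parsing logic above, here is a minimal standalone sketch applying the same split-based extraction to the sample line (note the extracted values come out lowercased, since the whole line is lowercased first):

    line = "Jul  7 16:15:29 node04-grupodw python: Login failure from [88.99.109.209] access denied to [http://foobar.com] with user [userAccess1]"
    lowered = line.lower ()
    print (lowered.split ("login failure from [")[1].split ("]")[0])   # 88.99.109.209
    print (lowered.split ("access denied to [")[1].split ("]")[0])     # http://foobar.com
    print (lowered.split ("with user [")[1].split ("]")[0])            # useraccess1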
    
  3. Now, place this file into /var/beep/core-admin/client-agent/log-handlers

  4. Now, inside /etc/core-admin/client/log-watcher.d, create a file that links /var/log/syslog with the handler you have just created. For example, create the file at /etc/core-admin/client/log-watcher.d/custom-login-failures-blocker with the following content:

    <log src="/var/log/syslog" handler="login_failure_handler" />
  5. After that, restart crad-log-watcher so it loads your login_failure_handler and the configuration you just created:
    /etc/init.d/crad-log-watcher restart
  6. Supervise crad-log-watcher execution to ensure everything is working:
    # tail -f /var/log/syslog | grep crad-log-watcher
  7. Now, test your setup by causing login failures against the protected system. You should see log entries created at /var/log/syslog (you can also inject a test entry; see the tip below).
    If you do not see login failure logs, the handler has nothing to react to. Review your code to ensure these logs are created.
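
    Tip: to simulate a failure line without touching your application, you can inject one into syslog with the standard logger tool. The IP and user below are hypothetical; be aware the handler will count this attempt toward blocking that IP:

    # logger -p user.info -t python "Login failure from [192.0.2.1] access denied to [http://example.com] with user [testuser]"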

  8. If login failure logs are being created, run the following command to check whether login failure tracking is happening:

    # sqlite3 -column -header /etc/core-admin/client/my.watcher.sql "select * from login_failure"

Have any questions? Contact us at support@core-admin.com (https://www.core-admin.com/portal/about-us/contact).

Posted in: Blacklist, Firewall, Security


Fixing Dovecot panic in mail-index-sync-keywords.c (broken Dovecot indexes)

Keyword index

  • Dovecot broken indexes
  • Panic: file mail-index-sync-keywords.c:
  • dovecot_mailbox_broken_indexes

Introduction

If you have problems accessing a mailbox and, after reviewing the logs, you find something like:

Jul  4 10:44:22 claudia dovecot: imap(someuser@core-admin.com): Panic: file mail-index-sync-keywords.c: line 227 (keywords_update_records): assertion failed: (data_offset >= sizeof(struct mail_index_record))
Jul  4 10:44:31 claudia dovecot: imap(someuser@core-admin.com): Panic: file mail-index-sync-keywords.c: line 227 (keywords_update_records): assertion failed: (data_offset >= sizeof(struct mail_index_record))
Jul  4 10:45:04 claudia dovecot: imap(someuser@core-admin.com): Panic: file mail-index-sync-keywords.c: line 227 (keywords_update_records): assertion failed: (data_offset >= sizeof(struct mail_index_record))
Jul  4 10:45:23 claudia dovecot: imap(someuser@core-admin.com): Panic: file mail-index-sync-keywords.c: line 227 (keywords_update_records): assertion failed: (data_offset >= sizeof(struct mail_index_record))
Jul  4 10:45:34 claudia dovecot: imap(someuser@core-admin.com): Panic: file mail-index-sync-keywords.c: line 227 (keywords_update_records): assertion failed: (data_offset >= sizeof(struct mail_index_record))
Jul  4 10:46:40 claudia dovecot: imap(someuser@core-admin.com): Panic: file mail-index-sync-keywords.c: line 227 (keywords_update_records): assertion failed: (data_offset >= sizeof(struct mail_index_record))
Jul  4 10:48:45 claudia dovecot: imap(someuser@core-admin.com): Panic: file mail-index-sync-keywords.c: line 227 (keywords_update_records): assertion failed: (data_offset >= sizeof(struct mail_index_record))

Then one or more Dovecot index and/or cache files are broken. This usually happens after moving a mailbox between different Dovecot versions. It can also happen with outdated Dovecot servers.

Resolution

Core-Admin will detect these errors automatically and will report a dovecot_mailbox_broken_indexes notification.

Click on it and then click on “Reset dovecot indexes”.

That will clear all Dovecot indexes on the destination machine for the failing mailbox.

If you want to fix it manually, locate the mailbox associated with the user and run the following command
(adjusting it to the failing user's mailbox):

>> find /var/spool/dovecot/mail/core-admin.com/someuser -name 'dovecot*' -type f -delete
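
Alternatively, recent Dovecot versions can rebuild the indexes without deleting files by hand. This is a suggestion beyond the original steps; check doveadm-force-resync(1) on your system first:

>> doveadm force-resync -u someuser@core-admin.com '*'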

After that, the problem should be fixed. No Dovecot restart is required.

Posted in: Core-Admin, Dovecot


Amavis failing, reporting “TROUBLE in child_init_hook: BDB can’t connect db env. at /var/lib/amavis/db”

Keyword index

  • Amavis trouble in child_init_hook
  • Amavis not processing mails, consuming 100% cpu

Introduction

If you find the following logs repeatedly while, at the same time, Amavis is not working properly:

Apr 25 11:40:17 node01[30000]: (!!)TROUBLE in child_init_hook: BDB can't connect db env. at /var/lib/amavis/db: File or directory does not exists. at (eval 94) line 342.
Apr 25 11:40:17 node01[30001]: (!!)TROUBLE in child_init_hook: BDB can't connect db env. at /var/lib/amavis/db: File or directory does not exists. at (eval 94) line 342.

Follow the next steps to recover the service and stop the stale notifications:

Resolution

This error is detected and automatically recovered by Core-Admin. If you already have Core-Admin and still see it, you might be running an old version. Upgrade it:

# crad-update.pyc  -u
# crad-update.pyc  -g

Then, recover manually by running the following command (or just wait for Core-Admin to do it for you in a few minutes):

# /usr/share/core-admin/tools/mail_admin/amavis-watcher.pyc --verbose

After that, the service should be recovered. Restart the agent and the log-watcher to discard stale notifications:

# /etc/init.d/crad-log-watcher  restart
# /etc/init.d/crad-agent  restart

Posted in: Administrador de Correo, Amavis, Mail Admin


Postfix reports “Connection timed out” but telnet works

Keyword index

  • Postfix connection cache
  • Postfix caching connection timeout
  • Postfix keeps reporting “Connection timed out”

Introduction

If you happen to find connection timeout reports in mail delivery reports or directly in the log, with information similar to the following:

Jan 11 07:32:14 server-smtp-01 postfix/error[11965]: 62E6DEBAD: to=<soporte@xxx.cl>, relay=none, delay=17159, delays=17159/0.04/0/0.04, dsn=4.4.1, status=deferred (delivery temporarily suspended: connect to mailserver.xx.es[194.X.187.X]:25: Connection timed out)

…but, at the same time, if you run a telnet test it works without any problem:

telnet 194.X.187.X 25
Trying 194.X.187.X...
Connected to 194.X.187.X.
Escape character is '^]'.
220 mailserver.xx.es ESMTP

In this context, even if you restart Postfix, you still find that it keeps reporting “Connection timed out”.

At the same time, you have already checked that no firewall is blocking the connection (keep in mind that root might be allowed to telnet to that port while the “postfix” user is not).

Why Postfix keeps reporting “Connection timed out”

All these “Connection timed out” errors reported by Postfix are usually caused by the Postfix connection cache code: http://www.postfix.org/CONNECTION_CACHE_README.html

That caching code is used by Postfix to speed up operations and to avoid repeatedly talking to slow or failing hosts. It is a performance improvement that, in some cases, might cause problems.

At some point in the past, the destination SMTP server started failing with a timeout; that state was then cached by Postfix, causing this problem.

How to solve “Connection timed out” when telnet works

First, review the “smtp_connect_timeout” configuration parameter and consider increasing it. The default is 30 seconds. Do not increase it beyond 90 seconds, since that might cause other problems.
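
For example, a quick sketch using postconf (the 60-second value is just an illustration, not a recommendation from this article):

postconf -e "smtp_connect_timeout = 60s"
postfix reload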

After that, run the following commands to requeue all mail and restart Postfix. Remember, this will retry everything in the queue:

postsuper -r ALL
/etc/init.d/postfix restart

Posted in: Postfix


Controlling postfix content filter Amavis with Valvula (access policy delegation protocol)

Abstract

Controlling mail checked or produced by a content filter server (Amavis) through the access policy delegation protocol server (Valvula) configured in Postfix.

Introduction

Due to the way Postfix works when you configure the “content_filter =” parameter (pointing to Amavis or any other content filter service), all mail that comes into the Postfix queue is sent to Amavis (or whichever content filter you run) to be processed and, if everything is fine, comes back to Postfix through a different internal port (typically 10025/tcp).

From here on, we will assume your content filter service is Amavis and Valvula is your policy delegation server. If that is not your case, this article is still relevant to your configuration.

Once Amavis has decided that everything is correct, the mail is sent back to Postfix on a dedicated port, usually declared as follows in /etc/postfix/master.cf:

# amavis connection, messages received from amavis 
127.0.0.1:10025 inet n - y - - smtpd
 -o content_filter=
 -o local_recipient_maps=
 -o relay_recipient_maps=
 -o smtpd_restriction_classes=
 -o smtpd_client_restrictions=
 -o smtpd_helo_restrictions=
 -o smtpd_sender_restrictions=
 -o smtpd_recipient_restrictions=permit_mynetworks,reject 
 -o mynetworks=127.0.0.0/8
 -o strict_rfc821_envelopes=yes
 -o receive_override_options=no_address_mappings

As you can see, any mail will be accepted on that port (10025) as long as it comes from localhost (total trust).

However, the problem we want to solve is how to deal with mail that originates from within the server itself (submitted via mail/maildrop, produced by an installed mailman, or generated by the content filter server itself), so that it is also limited by your policy server (Valvula).

In that case, given the configuration above, all mail that comes in through Amavis is not controlled by the policy server you might have installed (in this article, Valvula).

What to change so the policy server is called and your policy applied

With this identified, if you need to filter the mail sent back to Postfix by the content filter server, update the following parameter:

        -o smtpd_recipient_restrictions=permit_mynetworks,reject

…to the following:

        -o smtpd_recipient_restrictions=check_policy_service,inet:127.0.0.1:3579,permit_mynetworks,reject

This is the recommended setting with Core-Admin, where the relevant part is “127.0.0.1:3579” and has to be updated with your local settings.

This way, when Amavis finishes, the mail will have to go through Valvula on its way back to Postfix.
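
After editing master.cf, reload Postfix; on newer versions (2.9 and later) you can also print the effective master.cf entry for the 10025 service shown above to double-check the override:

postfix reload
postconf -Mf 127.0.0.1:10025/inet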

Interactions this configuration might cause

This change might make Valvula (or whichever policy server you configured) be called twice for every mail received: first when the mail is received, and again after Amavis finishes processing it.

Why not configure this by default

The configuration described here might be interesting only in some scenarios.

For dedicated mail servers this configuration is not useful/needed. By “dedicated mail servers” we mean those that do not run mailing list software, web pages or any other software that might internally produce mail that needs to be limited, blocked or discarded.

On the other hand, this configuration might not be interesting in those cases where the limitation can be done at the origin (updating the configuration of the service producing the mail) or even by using Postfix’s authorized_submit_users.

In short, this is not the only configuration available to limit/control mail produced inside the server using the policy delegation protocol.

Posted in: Administrador de Correo, Amavis, Postfix, Security, Valvula


Let’s Encrypt: the silent revolution

Let’s encrypt: the silent revolution of SSL certificates

If you have ever bought an SSL certificate (in fact, that is the old name, because now everything is TLS [2]) you will know that it has a cost, and that cost exists because a “trusted” organization places its digital signature on your certificate so that browsers, in turn, through this “chain of trust”, accept it.

And that is what all this SSL/TLS technology is about: the chain of trust.

Asymmetric Cryptography: the shortest description ever

To understand why SSL/TLS is so important for today’s internet security, and that characteristic green indicator we see when we type https:// to access our favorite site, we have to understand what Asymmetric Cryptography is [1] and how it relates to what we mentioned before: the chain of trust.

In short, asymmetric cryptography allows generating a public certificate and a private key, so that everything encrypted with the public certificate can only be decrypted with the private key (which is the one installed on the server and never leaves it, barring a security breach).

On top of this mathematical pillar lies the TLS protocol [2] (the evolved version of SSL), which defines an exchange between the connecting client and the server so both parties can communicate in a secure manner.

However, there is a “but”, and it is located in the part about “exchanging information in a secure manner”.

The missing part that completes SSL/TLS: the chain of trust

The only thing SSL/TLS ensures is that both parties, once the handshake is completed, can exchange messages without worrying that a third party has access to them in transit.

However, the big problem remains: how do we ensure that we are talking to the server we want, and not to another one intercepting the communication?

Here is where the chain of trust and the Certificate Authorities we all know come in; to name some: GeoTrust, Thawte, Verisign, Comodo…

The extra mile Certificate Authorities provide

With all these technical items identified, the missing piece that completes the puzzle is the set of companies and organizations that have a reputation and that, through agreements (simplifying the process for the sake of clarity), have managed to include their certificates in browsers, so most browsers recognize them by default.

Because browsers accept and trust these certificates, everything signed by them is also recognized and accepted without error.

What does Let’s Encrypt provide?

The foundational aim of the project is free and secure certificates for all. But without having to pay anything to the legacy certificate authorities?

Yes. So where is the trick? There is no catch.

However, we have to look at its origin to better understand the project’s purpose.

Let’s Encrypt is an initiative backed by big companies in the tech field that need their devices, intranets, management portals, etc., to have a certificate recognized by practically all browsers.

After all, what stops these companies from reaching similar agreements with browser vendors so their certificates are also supported?

By adding a protocol to validate and deploy certificates, Let’s Encrypt not only provides certificates that are fully recognized and free of cost: it also automates requesting and configuring the certificate, freeing system administrators from this burden.

Then, will certificate authorities disappear?

In our opinion, no. They will have to specialize in issuing certificates that require a further extra mile. At the same time, they will keep issuing certificates for companies, entities and organizations. That is where Let’s Encrypt “does not want to go” (but it could).

[1] https://es.wikipedia.org/wiki/Criptograf%C3%ADa_asim%C3%A9trica
[2] https://es.wikipedia.org/wiki/Transport_Layer_Security

Posted in: Let's Encrypt, Security, SSL/TLS


How Debian fixed CVE-2016-1247, NGINX log root escalation

Introduction

This article explores the CVE-2016-1247 exploit, how it was fixed by Debian, and what lessons we can extract from it to go even further in protecting/securing your systems.

This article also applies to CVE-2016-6664-5617 (MySQL root escalation); though that one is not a Debian-specific issue, it is the same conceptual failure, in the mysql wrapper, that allowed the PoC author (Dawid Golunski) to create a working exploit.

How Dawid Golunski’s PoC works (CVE-2016-1247 background)

Recently, Dawid Golunski (https://legalhackers.com/advisories/Nginx-Exploit-Deb-Root-PrivEsc-CVE-2016-1247.html) reported a working PoC that successfully escalates from www-data (the default nginx user) to full root account power. Phew!

Taking a closer look into the bug, condensing the steps taken by Dawid to break the system and skipping some details for brevity, these are the following:

  1. First, it starts with the assumption (not very hard to achieve) that you already own/control the www-data account, by controlling a .php file or similar CGI, or the WordPress administration account of that web, or a similar mechanism, so you can upload files and run arbitrary commands.
  2. From there, Dawid discovered that the log rotation performed by logrotate sets up the following permissions every day it rotates:
         /var/log/nginx/*.log
         create 0640 www-data adm

    That is, logrotate will rotate and “create” (that’s the point) an empty file with www-data:adm ownership. It looks harmless. This is the devil in the detail.

  3. From there, the rest is history: Dawid’s PoC removes /var/log/nginx/error.log and creates a symlink to /etc/ld.so.preload:
        rm /var/log/nginx/error.log
        ln -s /etc/ld.so.preload /var/log/nginx/error.log

    NOTE: for those wondering, “what the heck is that file for?”

    Basically, that file allows configuring “libraries” to “preload” before any other library when launching a binary.

    In essence, it is a mechanism that allows “intercepting” symbols/functions to replace their code with yours. It can be used as a mechanism for an “external fix” without binary modification but, like many mechanisms, it can also be used for evil.

    For more information, see: http://man7.org/linux/man-pages/man8/ld.so.8.html

  4. Once Dawid’s PoC creates that link, it only has to wait until the next day for log rotation to happen and let the system itself “open the door” by setting www-data:adm on /etc/ld.so.preload.
    NOTE: here is when you can play “Carmina Burana” in the background
  5. From there, Dawid’s PoC creates a small library that is preloaded into every binary launched by the system, with code to detect when it runs as root… and when it does, bang! It creates a setuid shell copy so you can escalate without a password.

But, hey, didn’t you start by saying “how Debian fixed” this problem?

Right; we wanted to go through how the PoC works to better understand the fix that was introduced and what we can learn from it.

Going back to the solution, if we take a look at Debian’s changelog, what they have done is to “secure” that log directory so it is owned by “root:adm” rather than “www-data:adm”:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=842295

    +  * debian/nginx-common.postinst:
    +    + CVE-2016-1247: Secure log file handling (owner & permissions)
    +      against privilege escalation attacks. /var/log/nginx is now owned
    +      by root:adm. Thanks to Dawid Golunski for the report.
    +      Changing /var/log/nginx permissions effectively reopens #701112,
    +      since log files can be world-readable. This is a trade-off until
    +      a better log opening solution is implemented upstream (trac:376).
    +      (Closes: #842295)

Certainly, having “/var/log/nginx” owned by root:adm makes it impossible for www-data to remove /var/log/nginx/error.log and then trick logrotate into creating an /etc/ld.so.preload owned by www-data (which is the crux).

However, are there more lessons we can learn to better secure the system against this kind of logrotate escalation? Keep on reading.

How services should handle logs (advice for developers and system administrators)

One of the problems with this nginx setup is that it does not separate log handling from log production. By making the nginx service handle its logs directly, instead of using syslog, it creates a permission problem that cannot be easily solved/secured.

If you separate log handling (so syslog handles all logs produced by nginx), you can secure all logs (owned and accessible by root:adm), rotate them, etc., without worrying about what the (hopefully low-privilege) service users can do. They can only attack files owned by their own user (hopefully only web files), which are not rotated.

See https://nginx.org/en/docs/syslog.html (to start using syslog for your nginx setup)
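
As a reference, a minimal sketch of such a setup, using the syslog directives documented at the URL above (adjust facility and tag to your needs):

   # nginx.conf: send logs to the local syslog daemon instead of files
   error_log  syslog:server=unix:/dev/log;
   access_log syslog:server=unix:/dev/log,facility=local7,tag=nginx combined;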

A service should never own its log
So a general conclusion we can derive from this is: never let a log be owned by the service producing it; otherwise, it can be used to escalate using exactly the same mechanism as Dawid’s PoC.

As an example, the following log ownership is wrong:

   service-user:adm   640        # bad/weak configuration

…and a good definition is:

  root:adm  640   # This is the good/recommended configuration because it prevents the user
                  # from deleting this log and creating a link to sensitive files.

But wait, what do I do with clamav, roundcube, mysql or dovecot (to name some)?

Some services come with a default package setup that does not allow changing the log owner to “root”, because these services need to be able to open and write their logs.

In any case, all these services CAN be configured to use syslog, and you should consider configuring your system to do so.

In fact, Dawid Golunski discovered exactly the same type of failure in MySQL packages, where a log escalation to root is possible (https://legalhackers.com/advisories/MySQL-Maria-Percona-RootPrivEsc-CVE-2016-6664-5617-Exploit.html).

Default log configuration might not be good
This article tries to raise awareness about default log configurations. Linux distributions provide a default configuration, but you have to carefully review how each log is handled, rotated and owned.

First conclusion: always use log service separation (if security is important to you)

That’s why it is very important to concentrate log reporting/handling in syslog, so you can separate the service from the log handling (which solves all these problems).

Again, if your system service runs as a low-privilege user (e.g. mysql, www-data, clamav, …), and log reporting is handled and written by a separate daemon (rsyslog, for example), you can safely create logrotate configurations that are always safe and do not depend on “wrapper script failures” or “packaging problems” that might end up forcing very controversial decisions (more about this later).

Second conclusion: why wait for the problem to happen?

Because this is life and you cannot control how packaging is done or how supervision wrappers are written, you can block this kind of attack by running:

   touch /etc/ld.so.preload
   chown root:root /etc/ld.so.preload  # you might already be hacked
   chmod 644 /etc/ld.so.preload        # hopefully not
   chattr +i /etc/ld.so.preload

This way you make sure the file exists and cannot be updated (not even by root, unless the attacker already has root and removes the +i flag).

With this setting you completely disable Dawid’s PoC and make logrotate escalation techniques much more difficult.

Third conclusion: logrotate must be checked constantly

The “logrotate” service becomes part of the attack scheme when “weak” configurations rotate logs using low-privilege owners, like the following (Debian example):

  /etc/logrotate.d/mysql-server:	create 640 mysql adm

As shown in https://legalhackers.com/advisories/MySQL-Maria-Percona-RootPrivEsc-CVE-2016-6664-5617-Exploit.html, this configuration can be exploited (log escalation from the mysql user). The best thing to do is to make the MySQL service use syslog for all log reporting and update that file so the log owner is:

   /etc/logrotate.d/mysql-server:	create 640 root adm

Knowing this, you can use the following command to review the logrotate configurations that are possible breaches in your system:

   find /etc/logrotate.d /etc/logrotate.conf -type f -exec grep -H create {} \; | grep -v "create 640 root adm"

Fourth conclusion: you cannot escape from this

You might be thinking “well, this is something internal…” or “I can handle it”. Maybe. In any case, let’s take a closer look at what Debian “really” did to fix this issue, by looking at the changelog:

  nginx-common (1.6.2-5+deb8u3) jessie-security; urgency=high

  In order to secure nginx against privilege escalation attacks, we are
  changing the way log file owners & permissions are handled so that www-data
  is not allowed to symlink a logfile. /var/log/nginx is now owned by root:adm
  and its permissions are changed to 0755. The package checks for such symlinks
  on existing installations and informs the admin using debconf.

  That unfortunately may come at a cost in terms of privacy. /var/log/nginx is
  now world-readable, and nginx hardcodes permissions of non-existing logs to
  0644. On systems running logrotate log files are private after the first
  logrotate run, since the new log files are created with 0640 permissions.

   -- Christos Trochalakis   Tue, 04 Oct 2016 15:20:33 +0300

That is, Debian has taken the path of limiting link creation, but at the cost of allowing all users to access that directory. In fact, during package upgrade/installation they attempt to detect possible malicious links, reporting them to the admin:

  Template: nginx/log-symlinks
  Type: note
  _Description: Possible insecure nginx log files
    The following log files under /var/log/nginx directory are symlinks
    owned by www-data:
   .
   ${logfiles}
   .
   Since nginx 1.4.4-4 /var/log/nginx was owned by www-data. As a result
   www-data could symlink log files to sensitive locations, which in turn
   could lead to privilege escalation attacks. Although /var/log/nginx
   permissions are now fixed it is possible that such insecure links
   already exist. So, please make sure to check the above locations.

As you can see, “yes”, the security update allows all system users to access these logs and, “yes”, the security update does not fix hacks already in place; you have to review them anyway.

This might be (and is) interesting but, for the scope of this article, as you can see, if you don’t fix your system setup to separate log handling from log production, you will have to live with very questionable trade-offs that otherwise could be easily avoided (and that are not Debian’s responsibility, by the way).

How Core-Admin handles and mitigates CVE-2016-1247 and CVE-2016-6664-5617

With all these details explained, Core-Admin does two things to mitigate this kind of “root log escalation” attack:

  1. The renamed-process checker automatically ensures that the /etc/ld.so.preload file is protected. It also reports any change to that file.
  2. A “log” checker ensures logrotate configurations are working and have known (or at least accepted) ownership declarations that are secure, reporting any insecure configuration that might lead to problems.

Posted in: Core-Admin, Debian, LogRotate, Security


KB 22092016-001 : Fixing error message: The requested URL /cgi-bin/php-fastcgi-wrapper/index.php was not found on this server.

Symptom

If you get the following error (or something similar) when accessing a website created with the Core-Admin panel:

The requested URL /cgi-bin/php-fastcgi-wrapper/index.php was not found on this server.

Then it is possible that you created the website with the custom configuration option, importing the php-engine setting from another site, or that the site.com/bin directory was lost.

Affected releases

All releases may suffer this problem. It’s not a bug, but a wrong custom configuration.

Background

The problem is caused because, somehow, Core-Admin was not able to create all the PHP structures needed to run this site with a different PHP engine.

Solution

To solve this, follow these general steps:

  1. Disable custom configuration and let Core-Admin control the site.conf apache2 configuration. To do so, go to the WebHosting management tool, click on custom site configs, find the affected site, copy your custom settings into a temporary document (to restore them later) and disable custom configuration.
  2. After that, select the PHP engine you want under the “PHP engines” section.
  3. Then re-enable custom site configuration (if needed) and restore your custom settings.

Posted in: Apache2, Core-Admin Web Edition, PHP


Configuring Let’s encrypt for Core-Admin panel’s certificate

Configuring Let’s encrypt for Core-Admin panel’s certificate

The following short guide gives you tips on how to configure a Let’s Encrypt certificate for your Core-Admin web administration panel: that is, the certificate used by the panel to secure all communication between your web browser and the Core-Admin server.

These indications depend on the current status of your Core-Admin installation and on whether you prefer doing it from the console or using the web panel.

Having a working Core-Admin server: upgrade to a Let’s Encrypt certificate

If you have a working Core-Admin with web access, you can install the “Let’s Encrypt Management” application and then use its specific option to request and configure a Let’s Encrypt certificate for your local server. Here is how:

After you have installed the tool (or if you already had it), open it and follow these steps:

Let's encrypt management -> Actions -> Certificate for Core-Admin server  (follow instructions from there)

Having a working Core-Admin server with Let’s Encrypt already deployed: console command

If you are already using Core-Admin with the Let’s Encrypt tool, you can use the following command to request, install and reconfigure your Core-Admin server with a Let’s Encrypt certificate:

>> crad-lets-encrypt.pyc -s <your-contact-email>

Configuring the Let’s Encrypt certificate right after finishing the Core-Admin installation with core-admin-installer.py

If you have just installed Core-Admin, you can use the following commands to install the Let’s Encrypt application and the Certificate manager, and to request the certificate for your Core-Admin server:

>> cd /root
>> wget http://www.core-admin.com/downloads/core-admin-installer.py
>> chmod +x core-admin-installer.py
>> ./core-admin-installer.py --core-admin-le-cert=<your-contact-email>

The difference between this command and crad-lets-encrypt.pyc is that the latter is only available once the Let’s Encrypt management tool is installed; otherwise crad-lets-encrypt.pyc will not be available.

Posted in: Administration, Certificates, Core-Admin, Let's Encrypt, Security, SSL/TLS


Updating notification time for mailbox quota exceeded

Inside Core-Admin, with the Mail Admin app, you can configure a notification that is sent to the administrator when mailboxes are over quota, and you can also make the system send a quota notification directly to the end user.

For that, open the Mail Admin app and go to the quota notification options.

However, if you want to change when those quotas are notified and how frequently, you will have to:

  1. Update the cron specification located at /etc/cron.d/crad-mail-quotas to adjust it to your needs. Remember to only change the schedule of the lines running the command “crad-mail-admin-mgr.pyc -k -f” (see the hypothetical example after this list).
  2. To avoid having the file overwritten by the system during a package upgrade, add the immutable flag with the following command:
    chattr +i /etc/cron.d/crad-mail-quotas
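
As an illustration only, a hypothetical crad-mail-quotas entry moved to run daily at 08:00 could look like the line below; the real file on your system will differ, so keep its command part untouched and adjust just the schedule:

    0 8 * * *   root   crad-mail-admin-mgr.pyc -k -f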


Posted in: Administrador de Correo, Administration, Core-Admin, Mail Admin
