Blog

Archive for Security

How to globally change a WordPress account password for all your WordPress sites with Core-Admin

If you need to change the password for the same account across all WordPress sites on a machine, you can use the Core-Admin option described in the following article, which explains how to change the password for the same account at every WordPress installation found on a machine with Core-Admin:

https://support.asplhosting.com/t/how-to-globally-change-wordpress-user-account-password-for-all-your-wordpress-sites-with-core-admin/217

Posted in: Security, WordPress, WordPress Manager


How to use ipset to block large sets of IPs efficiently with Core-Admin and #IpBlocker


Introduction to ipset with Core-Admin

If you want to block a large number of IPs (more than 500 IPs/networks), you might notice that the default block-by-iptables setting is not fast enough: it creates a large iptables rule set that the kernel must traverse rule by rule, with bad performance as a result.


If this is your case, here is how to configure the #IpBlocker tool to use the Linux kernel's ipset facility.

Prerequisites for ipset with Core-Admin

This option is not available for Debian Lenny, Debian Squeeze and CentOS 6 due to poor or missing ipset support.
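To quickly verify that your kernel and userland provide working ipset support, you can create and destroy a throw-away set (the set name cadmin_test is just an example):

>> ipset create cadmin_test hash:ip
>> ipset add cadmin_test 192.0.2.1
>> ipset test cadmin_test 192.0.2.1
>> ipset destroy cadmin_test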

How to enable ipset with Core-Admin

#IpBlocker is prepared to switch from block-by-iptables to block-by-ipset and vice versa anytime you need it. This includes cases where the firewall is already enabled and running with a working set of blocking rules.

To enable it, follow these steps. Open the #IpBlocker tool (it requires administrator permissions).


Then, open its configuration.


Then select block-by-ipset as the block mode and save. If the option is not available, update your Core-Admin installation. Depending on the number of rules your machine has, switching to ipset might take a few minutes.


Operation enabled

If everything went OK, you can keep using #IpBlocker as usual (and so can the rest of the system). No additional step is required: once the switch is done, it is transparent to both the user and the system.

Some internal details on how ipset is used with Core-Admin

Under ipset mode, Core-Admin installs only a few rules in the iptables and ip6tables chains to link in the ipsets it creates:

>> iptables -S | grep set
-A INPUT -m set --match-set core_admin_blacklist_ipv4_net src -j DROP
-A INPUT -m set --match-set core_admin_blacklist_ipv4 src -j DROP
-A FORWARD -m set --match-set core_admin_blacklist_ipv4_net src -j DROP
-A FORWARD -m set --match-set core_admin_blacklist_ipv4 src -j DROP
-A OUTPUT -m set --match-set core_admin_blacklist_ipv4_net src -j DROP
-A OUTPUT -m set --match-set core_admin_blacklist_ipv4 src -j DROP

These “sets” are accessible through common ipset commands (do not manipulate them directly; use the #IpBlocker application or the crad-ip-blocker.pyc command line tool):

>> ipset list
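For example, assuming the set names shown earlier, you can list a single set or check whether a given IP is currently blocked (both are read-only operations):

>> ipset list core_admin_blacklist_ipv4
>> ipset test core_admin_blacklist_ipv4 192.0.2.10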

Posted in: Blacklist, Security


Integrating with Crad-Log-Watcher to block IPs due to login failures for your custom web/server app


To integrate your application's login failures with crad-log-watcher, so that the remote IP is blocked automatically once a certain number of login failures is reached, follow this guide.

The following steps create a login failure handler that tracks and manages login failures for any given application, blocking the source IP when a configurable threshold is reached.

  1. First, make your web or server application generate a log line that reports the failed login attempt and its source IP.

    We recommend sending this log to syslog because it is accessible to all system users and requires no special privileges; that will simplify the next steps. If you decide to send this information to another log, just adapt everything as needed.

    With PHP, generating such a log line looks like this:

    // in case of login failure:
    $remoteAddr     = $_SERVER['REMOTE_ADDR'];
    $currentWeb     = $_SERVER['SERVER_NAME'];
    $loginAttempted = $login; // adapt: point this variable to the login that was attempted and failed
    syslog (LOG_INFO, "Login failure from [$remoteAddr] access denied to [$currentWeb] with user [$loginAttempted]");

    This records a line in syslog every time a login failure happens.
    This change is required for every web page or server you want to protect.

  2. Now create a custom handler for crad-log-watcher that reads these logs, keeps track of login failures and blocks IPs that reach the threshold.

    For that, create a file called login_failure_handler.py with the following content (adapt as needed):

    #!/usr/bin/python
    
    from core_admin_common import support
    from core_admin_agent  import checker, watcher
    
    # database for tracking login failures
    database_path = "/etc/core-admin/client/my.watcher.sql"
    
    def init ():
    
        # notify this is child for checker notification
        checker.is_child = True
    
        # track and lock login failures
        (status, info) = watcher.create_track_login_failure_tables (database_path)
        if not status:
            return (False, "Unable to create ip_registry table, error was: %s" % info)
    
        return (True, None) # return ok init
    
    def handle_line (line, source_log):
    
        # only process log failures to block
        if "login failure from [" in line.lower () and "access denied to [" in line.lower ():
            
            # assuming log error format:
            # Jul  7 16:15:29 node04-grupodw python: Login failure from [88.99.109.209] access denied to [http://foobar.com] with user [userAccess1]
        
            source_ip_to_block    = line.lower ().split ("login failure from [")[1].split ("]")[0]
            login_failure_because = line.lower ().split ("access denied to [")[1].split ("]")[0]
            user_login            = line.lower ().split ("with user [")[1].split ("]")[0]
    
            # configure notification
            reason                   = "Access failure to %s" % login_failure_because
            fail_logins_before_block = 10
            
            # call to track user and ip
            watcher.track_and_report_login_failure (user_login, source_ip_to_block, reason, database_path, fail_logins_before_block, source_log)
    
        # end if
        
        return
    
  3. Now place this file into /var/beep/core-admin/client-agent/log-handlers

  4. Now, inside /etc/core-admin/client/log-watcher.d, create a file that links /var/log/syslog with the handler you have just created. For example, give it the following path: /etc/core-admin/client/log-watcher.d/custom-login-failures-blocker

    <log src="/var/log/syslog" handler="login_failure_handler" />
  5. After that, restart crad-log-watcher so it loads your login_failure_handler and the configuration you created:
    /etc/init.d/crad-log-watcher restart
  6. Supervise crad-log-watcher execution to ensure everything is working with:
    # tail -f /var/log/syslog | grep crad-log-watcher
  7. Now test your setup by causing login failures against the protected system. You should see the corresponding lines appear in /var/log/syslog.
    If you do not see login failure logs, blocking will not work; review your code to ensure these logs are created.
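
    If you need to cause a matching log line without touching the application, you can inject one by hand with the logger command (the IP, site and user below are just examples):

    # logger "Login failure from [203.0.113.10] access denied to [http://example.com] with user [testuser]"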

  8. If login failure logs are created, run the following command to check that login failure tracking is happening:

    # sqlite3 -column -header /etc/core-admin/client/my.watcher.sql "select * from login_failure"

Have any questions? Contact us at support@core-admin.com (https://www.core-admin.com/portal/about-us/contact).

Posted in: Blacklist, Firewall, Security


Controlling the Postfix content filter (Amavis) with Valvula (access policy delegation protocol)


Abstract

Controlling mail checked or produced by a content filter server (Amavis) through the access policy delegation server (Valvula) configured in Postfix.

Introduction

When you configure Postfix's “content_filter =” parameter to point to Amavis (or any other content filter service), every mail that enters the Postfix queue is handed to Amavis for processing and, if everything is fine, comes back to Postfix through a different internal port (typically 10025/tcp).

From here on, we will assume Amavis is your content filter service and Valvula is your policy delegation server. If that is not the case, this article is still relevant to your configuration.

Once Amavis has decided that everything is correct, the mail is sent back to Postfix on a dedicated port, usually declared as follows in /etc/postfix/master.cf:

# amavis connection, messages received from amavis 
127.0.0.1:10025 inet n - y - - smtpd
 -o content_filter=
 -o local_recipient_maps=
 -o relay_recipient_maps=
 -o smtpd_restriction_classes=
 -o smtpd_client_restrictions=
 -o smtpd_helo_restrictions=
 -o smtpd_sender_restrictions=
 -o smtpd_recipient_restrictions=permit_mynetworks,reject 
 -o mynetworks=127.0.0.0/8
 -o strict_rfc821_envelopes=yes
 -o receive_override_options=no_address_mappings

As you can see, any mail will be accepted on that port (10025) as long as it comes from localhost (total trust).

However, the problem we want to solve is how to deal with mail originating from within the server itself (submitted via mail/maildrop, generated by an installed Mailman, or produced by the content filter server's own configuration) so that it is also limited by your policy server (Valvula).

In that case, given the configuration above, mail coming back through Amavis is not controlled by the policy server you might have installed (in this article, Valvula).

What to change so the policy server is called and your policy is applied

With this identified, if you need to filter mail sent back to Postfix by the content filter server, update the following parameter:

        -o smtpd_recipient_restrictions=permit_mynetworks,reject

...to the following:

        -o smtpd_recipient_restrictions=check_policy_service,inet:127.0.0.1:3579,permit_mynetworks,reject

This is the recommended setting with Core-Admin. The relevant part is “127.0.0.1:3579”, which has to be updated to match your local settings.

This way, when Amavis finishes, the mail has to go through Valvula on its way back to Postfix.
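If you want to check by hand that the policy server answers on that port, you can speak the policy delegation protocol directly: it is a set of attribute=value lines terminated by an empty line. This is just an illustrative test (it assumes Valvula is listening at 127.0.0.1:3579 and uses example addresses):

        printf "request=smtpd_access_policy\nprotocol_state=RCPT\nsender=test@example.com\nrecipient=user@example.com\nclient_address=192.0.2.1\n\n" | nc 127.0.0.1 3579

A permissive answer looks like “action=dunno”, which tells Postfix to continue with the next restriction; any other action means your policy was applied.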

Interactions this configuration might cause

This change may cause Valvula (or whichever policy server is configured) to be called twice for every mail received: first when the mail initially arrives, and again after Amavis finishes processing it.

Why not configure this by default

The configuration described here is only interesting in some scenarios.

For dedicated mail servers this configuration is not useful or needed. By “dedicated mail servers” we mean those that do not run mailing list software, web pages or any other software that might produce mail internally that needs to be limited, blocked or discarded.

On the other hand, this configuration might not be interesting in those cases where the limitation can be applied at the origin (by updating the configuration of the service producing the mail) or by using Postfix's authorized_submit_users.

In short, this is not the only configuration available to limit or control mail generated inside the server using the policy delegation protocol.

Posted in: Administrador de Correo, Amavis, Postfix, Security, Valvula


Let’s Encrypt: the silent revolution


If you have ever bought an SSL certificate (in fact, that is the old name, because now everything is TLS [2]) you will know that it has a cost, and that cost exists because a “trusted” organization places its “digital signature” on our certificate so that browsers, in turn, through this “trust chain”, accept it.

And that is what this SSL/TLS technology is all about: the chain of trust.

Asymmetric Cryptography: the shortest description ever

To understand why SSL/TLS is so important for today's internet security, and that characteristic “green” we see when we type https:// to access our favorite site, we have to understand what asymmetric cryptography [1] is and how it relates to what we mentioned before: the chain of trust.

In short, asymmetric cryptography allows generating a public certificate and a private key so that everything encrypted with the public certificate can only be decrypted with the private key (which is the one installed on the server and should never leave it, barring a security breach).
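A minimal way to see this idea working is with OpenSSL on the command line (illustrative commands only; the file names are arbitrary):

>> openssl genrsa -out private.key 2048
>> openssl rsa -in private.key -pubout -out public.pem
>> echo "secret" | openssl pkeyutl -encrypt -pubin -inkey public.pem -out message.enc
>> openssl pkeyutl -decrypt -inkey private.key -in message.enc

Only the holder of private.key can recover the message encrypted with public.pem.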

On top of this mathematical pillar lies the TLS protocol [2] (the evolved version of SSL), which defines an exchange between the connecting client and the server so both parties can communicate in a secure manner.

However, there is a “but”, and it is located in the part about “exchanging information in a secure manner”.

The missing part to complete SSL/TLS: the trusting chain

The only thing SSL/TLS ensures is that both parties, once the handshake is completed, will be able to exchange messages without having to worry that a third party can read them in transit.

However, the big problem remains: how do we ensure that we are talking to the server we intended, and not to another one intercepting the communication?

This is where the chain of trust and the Certificate Authorities we all know come in; to name a few: GeoTrust, Thawte, Verisign, Comodo…

The extra mile Certificate Authorities provide

With all these technical pieces identified, the missing piece of the puzzle are the companies and organizations with enough reputation that, through agreements (simplifying the process for the sake of clarity), have managed to get their certificates included in browsers, so most browsers recognize them by default.

Because browsers accept and trust these certificates, everything signed by them will also be recognized and accepted without error.

What does Let's Encrypt provide?

The foundational aim of the project is free and secure certificates for all. But without having to pay anything to the legacy certificate authorities?

Yes. Then, where is the trick? There is no catch.

However, we have to look at its origin to better understand the project's purpose.

Let's Encrypt is an initiative backed by big companies in the tech field that need their devices, intranets, management portals, etc., to have a certificate recognized by most browsers.

After all, what stops these companies from reaching similar agreements with browser vendors so their certificates are also supported?

By adding a protocol to validate and deploy certificates, Let's Encrypt not only provides certificates that are fully recognized and free of cost: it also automates requesting and configuring the certificate, freeing system administrators from this burden.
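For example, with a standalone ACME client such as certbot (shown here as an illustration with a hypothetical domain; it is not necessarily the client your panel uses):

>> certbot certonly --webroot -w /var/www/example.com -d example.com

After the domain is validated, the certificate and key are issued and ready to be wired into the web server configuration.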

Then, will certificate authorities disappear?

In our opinion, no. They will have to specialize in issuing certificates that require a new extra mile, while continuing to issue certificates for companies, entities and organizations. That is where Let's Encrypt “does not want to go” (but it could).

[1] https://es.wikipedia.org/wiki/Criptograf%C3%ADa_asim%C3%A9trica
[2] https://es.wikipedia.org/wiki/Transport_Layer_Security

Posted in: Let's Encrypt, Security, SSL/TLS


How Debian fixed CVE-2016-1247, NGINX log root escalation

Introduction

This article explores the CVE-2016-1247 exploit, how it was fixed by Debian and what lessons we can extract from it to go even further in protecting/securing your systems.

This article also applies to CVE-2016-6664/5617, a MySQL root escalation; although that one is not a Debian-specific issue, it is the same conceptual failure, in the mysql wrapper, that allowed the PoC author (Dawid Golunski) to create a working exploit.

How Dawid Golunski’s PoC works (CVE-2016-1247 background)

Recently, Dawid Golunski (https://legalhackers.com/advisories/Nginx-Exploit-Deb-Root-PrivEsc-CVE-2016-1247.html) reported a working PoC that successfully escalates from www-data (the default nginx user) to full root privileges. Phew!

Taking a closer look at the bug, and condensing the steps Dawid takes to break the system (skipping some details for brevity), they are the following:

  1. First, it starts from the assumption (not very hard to achieve) that you already own/control the www-data account, by controlling a .php file, a similar CGI, the WordPress administration account of that site, or some similar mechanism, so you can upload files and run arbitrary commands.
  2. From there, Dawid discovered that the log rotation performed by logrotate sets up the following permissions every day it rotates:
         /var/log/nginx/*.log
         create 0640 www-data adm

    That is, logrotate will rotate and “create” (that's the point) an empty file owned by www-data:adm. It looks harmless. The devil is in the details.

  3. From there, the rest is history: Dawid's PoC removes /var/log/nginx/error.log and creates a symlink to /etc/ld.so.preload:
        rm /var/log/nginx/error.log
        ln -s /etc/ld.so.preload /var/log/nginx/error.log

    NOTE: for those wondering, “what the heck is that file for?”

    Basically, that file configures “libraries” to be “preloaded” before any other library when a binary is launched.

    In essence, it is a mechanism for “intercepting” symbols/functions and replacing their code with yours. It can be used to apply an “external fix” without modifying a binary but, like many mechanisms, it can also be used for evil.

    For more information, see: http://man7.org/linux/man-pages/man8/ld.so.8.html

  4. Once Dawid's PoC creates that link, it only has to wait until the next day for log rotation to happen, letting the system itself “open the door” by setting www-data:adm ownership on /etc/ld.so.preload.
    NOTE: this is when you can play “Carmina Burana” in the background.
  5. From there, Dawid's PoC creates a small library that gets attached to every binary your system loads, with code that detects when it runs as root… and when it does, “bang!”, it creates a setuid shell copy so you can escalate without a password.

But hey, didn't you start by saying this was about “how Debian fixed” the problem?

Right; we wanted to go through how the PoC works to better understand the fix introduced and what we can learn from it.

Going back to the solution, if we take a look at Debian's changelog, what they have done is “secure” the permissions for that log directory so it is owned by “root:adm” rather than “www-data:adm”:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=842295

    +  * debian/nginx-common.postinst:
    +    + CVE-2016-1247: Secure log file handling (owner & permissions)
    +      against privilege escalation attacks. /var/log/nginx is now owned
    +      by root:adm. Thanks to Dawid Golunski for the report.
    +      Changing /var/log/nginx permissions effectively reopens #701112,
    +      since log files can be world-readable. This is a trade-off until
    +      a better log opening solution is implemented upstream (trac:376).
    +      (Closes: #842295)

Certainly, having “/var/log/nginx” owned by root:adm makes it impossible for www-data to remove /var/log/nginx/error.log and then trick logrotate into creating an /etc/ld.so.preload owned by www-data (which is the crux).
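If you manage a system by hand, the equivalent hardening can be applied directly (this mirrors what the updated package does; adjust paths if your layout differs):

   chown root:adm /var/log/nginx
   chmod 0755 /var/log/nginx

With the directory owned by root, www-data can no longer delete or replace the log files inside it, even though nginx keeps appending to them.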

However, are there more lessons we can learn to better secure the system against this kind of logrotate escalation? Keep on reading.

How services should handle logs (advice for developers and system administrators)

One of the problems with this nginx setup is that it does not separate log handling from log production. By making the nginx service handle its logs directly, instead of using syslog, it creates a permission problem that cannot be easily solved or secured.

If you separate log handling (so syslog handles all logs produced by nginx), you can secure all logs (owned by and accessible to root:adm), rotate them, etc., without worrying about what the service users (hopefully low-privilege users) can do. They can only attack files owned by their own user (hopefully only web files), which are not rotated.

See https://nginx.org/en/docs/syslog.html (to start using syslog for your nginx setup)
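As a sketch, the relevant nginx directives look like this (the facility and tag values are just examples):

   error_log syslog:server=unix:/dev/log warn;
   access_log syslog:server=unix:/dev/log,facility=local7,tag=nginx;

From that point on, the syslog daemon decides where the log files live and who owns them, and logrotate only has to deal with root-owned files.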

Service should never own log
So a general conclusion we can derive from this is: never let a log be owned by the service producing it; otherwise, it can be used to escalate using exactly the same mechanism as Dawid's PoC.

As an example, the following is wrong:

   service-user:adm   640        # bad/weak configuration

…and a good definition is:

  root:adm  640   # This is the good/recommended configuration because it prevents user
                  # deleting this log and creating a link to sensitive files.

But wait, what do I do with clamav, roundcube, mysql, dovecot (to name some)?

Some services come with a default “package setup” that does not allow changing the log owner to “root”, because these services need to be able to open and write their logs.

In any case, all these services CAN be configured to use syslog and you should consider configuring your system to do so.

In fact, Dawid Golunski also discovered “exactly” the same type of failure in MySQL packages, where a log escalation to root is possible (https://legalhackers.com/advisories/MySQL-Maria-Percona-RootPrivEsc-CVE-2016-6664-5617-Exploit.html ).

Default log configuration might not be good
This article tries to raise awareness about default log configurations. Linux distributions provide a default configuration, but you have to carefully review how each log is handled, rotated and owned.

First conclusion: always use log service separation (if security is important to you)

That's why it is very important to concentrate log reporting and handling in syslog, so you can separate the service from log handling (which solves all these problems).

Again, if you make your system service run as a low-privilege user (e.g. mysql, www-data, clamav…) and log reporting is handled and written by a separate daemon (rsyslog, for example), you can safely create logrotate configurations that are always safe and do not depend on “wrapper script failures” or “packaging problems” that might end up forcing very controversial decisions (more on this later).

Second conclusion: why wait for the problem to happen

Because this is real life and you cannot control how packaging is done, how supervision wrappers are written, etc., you can block this kind of attack by doing:

   touch /etc/ld.so.preload
   chown root:root /etc/ld.so.preload # you might already be hacked...
   chmod 644 /etc/ld.so.preload       # ...hopefully not
   chattr +i /etc/ld.so.preload

This way you make sure the file exists and cannot be updated (not even by root, unless the attacker already has root and removes the +i flag).

With this setting you completely disable Dawid's PoC and make logrotate escalation techniques in general more difficult.

Third conclusion: logrotate must be constantly checked

The logrotate service becomes part of the attack scheme when “weak” configurations rotate logs using low-privilege owners, like the following (Debian example):

  /etc/logrotate.d/mysql-server:	create 640 mysql adm

As shown in https://legalhackers.com/advisories/MySQL-Maria-Percona-RootPrivEsc-CVE-2016-6664-5617-Exploit.html, this configuration can be exploited (log escalation from the mysql user). The best thing to do is to make the MySQL service use syslog for all log reporting and update that file so the log owner is:

   /etc/logrotate.d/mysql-server:	create 640 root adm

Knowing this, you can use the following command to review logrotate configurations that are possible breaches on your system:

   find /etc/logrotate.d /etc/logrotate.conf -type f -exec grep -H create {} \; | grep -v "create 640 root adm"

Fourth conclusion: you cannot escape from this

You might be thinking “well, this is something internal…” or “I can handle it”. Maybe. In any case, let's take a closer look at what Debian “really” did to fix this issue by looking at the changelog:

  nginx-common (1.6.2-5+deb8u3) jessie-security; urgency=high

  In order to secure nginx against privilege escalation attacks, we are
  changing the way log file owners & permissions are handled so that www-data
  is not allowed to symlink a logfile. /var/log/nginx is now owned by root:adm
  and its permissions are changed to 0755. The package checks for such symlinks
  on existing installations and informs the admin using debconf.

  That unfortunately may come at a cost in terms of privacy. /var/log/nginx is
  now world-readable, and nginx hardcodes permissions of non-existing logs to
  0644. On systems running logrotate log files are private after the first
  logrotate run, since the new log files are created with 0640 permissions.

   -- Christos Trochalakis   Tue, 04 Oct 2016 15:20:33 +0300

That is, Debian has taken the path of limiting link creation, at the cost of allowing all users access to that directory. In fact, during package upgrade/installation, the package attempts to detect possibly malicious links and reports them to the user:

  Template: nginx/log-symlinks
  Type: note
  _Description: Possible insecure nginx log files
    The following log files under /var/log/nginx directory are symlinks
    owned by www-data:
   .
   ${logfiles}
   .
   Since nginx 1.4.4-4 /var/log/nginx was owned by www-data. As a result
   www-data could symlink log files to sensitive locations, which in turn
   could lead to privilege escalation attacks. Although /var/log/nginx
   permissions are now fixed it is possible that such insecure links
   already exist. So, please make sure to check the above locations.

As you can see: yes, the security update allows all system users to access these logs, and yes, the security update does not fix compromises already in place; you have to review for them anyway.

This is interesting but, for the scope of this article, as you can see, if you don't fix your system setup to “separate log handling from log production” you will have to live with very questionable trade-offs that could otherwise be easily avoided (and are not Debian's responsibility, by the way).

How Core-Admin handles and mitigates CVE-2016-1247 and CVE-2016-6664-5617

With all these details explained, Core-Admin does two things to mitigate this kind of “root log escalation” attack:

  1. The renamed-process checker automatically ensures that the /etc/ld.so.preload file is protected. It also reports any change to that file.
  2. A “log” checker ensures logrotate configurations are working and use known (or at least accepted) ownership declarations that are secure, reporting any insecure configuration that might lead to problems.

Posted in: Core-Admin, Debian, LogRotate, Security


Configuring Let’s encrypt for Core-Admin panel’s certificate


The following short guide gives you tips on how to configure a Let's Encrypt certificate for your Core-Admin web administration panel; that is, the certificate used by the panel to secure all communication between your web browser and the Core-Admin server.

Which indications apply depends on the current status of your Core-Admin installation and on whether you prefer working from the console or using the web panel.

Having a working Core-Admin server: upgrade to let’s encrypt certificate

If you have a working Core-Admin with web access, you can install the “Let's encrypt Management” application and then use its specific option to request and configure a Let's Encrypt certificate for your local server. Here is how:

After you have installed the tool (or if you already have it), open it and follow these steps:

Let's encrypt management -> Actions -> Certificate for Core-Admin server  (follow instructions from there)

Having a working Core-Admin server with let’s encrypt already deployed: console command

If you are already using Core-Admin with the Let's encrypt tool, you can use the following command to request, install and reconfigure your Core-Admin server with a Let's Encrypt certificate:

>> crad-lets-encrypt.pyc -s <your-contact-email>

Configuring let’s encrypt certificate just after finishing Core-Admin installation using core-admin-installer.py

If you have just installed Core-Admin, you can use the following command to install the Let's encrypt application and the Certificate manager, and to request the certificate for your Core-Admin server:

>> cd /root
>> wget http://www.core-admin.com/downloads/core-admin-installer.py
>> chmod +x core-admin-installer.py
>> ./core-admin-installer.py --core-admin-le-cert=<your-contact-email>

The difference between this command and crad-lets-encrypt.pyc is that the latter is only available once the Let's encrypt management tool is installed; otherwise crad-lets-encrypt.pyc will not be present.

Posted in: Administration, Certificates, Core-Admin, Let's Encrypt, Security, SSL/TLS


Updates — KB: 24032014-001: Dealing with TIME WAIT exhaustion (no more TCP connections)

The KB http://www.core-admin.com/portal/kb-24032014-001-dealing-with-time-wait-exhaustion-no-more-tcp-connections about managing TIME WAIT configuration problems reported by the TIME WAIT checker has been updated to cover configuring the TCP TIME WAIT recycle option (/proc/sys/net/ipv4/tcp_tw_recycle). The article also includes additional information about how this option interacts with (and may cause problems for) devices behind NAT firewalls when the server running this option is accessed from behind them.

The article also includes a reference to Troy Davis’ article http://troy.yort.com/improve-linux-tcp-tw-recycle-man-page-entry/ which explains in more detail how this happens.
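As a quick way to check whether TIME WAIT exhaustion is actually a concern on a given server, you can count the sockets currently in that state (an illustrative command, not part of the KB):

   ss -tan state time-wait | wc -l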

Posted in: Administration, Firewall, KB, Security


Let’s encrypt: trusted SSL/TLS certificates for everyone

Let's Encrypt (http://letsencrypt.org) is now making it possible for everyone to have access to trusted certificates for free. It does so through a client that implements the ACME protocol (https://github.com/ietf-wg-acme/acme/), which gives you access to the Let's Encrypt infrastructure to request and issue a certificate for your domains.

This is a very important step toward securing the internet even further, by making it possible for, at least, all administration panels to be secured. We say “at least administration panels” because you might still be interested in a legacy SSL/TLS certificate that includes your contact information or, for legal or technical reasons, you might need a certificate signed by a particular vendor.

In any case, this new technology, promoted by important vendors invested in the web, will provide a secure and trusted solution for many of our devices (routers, IoT “things”, appliances, etc.), securing them with an https:// page running an SSL/TLS certificate… for free!

So there is no excuse anymore for not protecting your sensitive web pages, especially those running critical services behind http:// administration panels.

Core-Admin and Let’s encrypt management application

Core-Admin now supports Let's Encrypt, fully integrating it with an easy-to-use graphical interface that lets you easily locate the web pages running on your servers and request a certificate for them.

This new application is available from Core-Admin revision 4615. Check the Let's encrypt for Core-Admin manual for more details: http://www.core-admin.com/portal/applications/lets-encrypt

Posted in: Administration, Certificates, Security
