Check certificate expiration for domains

I manage a number of websites (around 60), and to automate part of my job I wrote a script that checks the expiration of their SSL certificates.

Most of the sites I manage use Let's Encrypt and are self-hosted by me on Linode, but sometimes (rarely, if ever) certbot renew doesn't work, or some external hosting company decides to renew the certificate 2 days before expiration. Why? I don't know.

Let's Encrypt automatically renews a certificate when fewer than 30 days remain, so this script will rarely report a problem.

So this script runs daily and warns me when some host's certificate is about to expire, so I can check manually.

This script used Net::SSL::ExpireDate to check the expiration date, but that module doesn't seem to like Cloudflare certificates, so I added another function that gets the certificate expiration date using openssl.
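For reference, the same openssl-style check can be sketched in Python with the standard ssl module; the helper names here are my own, not part of the original script:

```python
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(notafter: str) -> int:
    """Days remaining, given a notAfter string such as
    'Jun  1 12:00:00 2025 GMT' (the format used by openssl and getpeercert)."""
    expires = datetime.strptime(notafter, "%b %d %H:%M:%S %Y %Z")
    return (expires.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)).days

def cert_notafter(host: str, port: int = 443) -> str:
    """Do a TLS handshake (with SNI, like openssl -servername) and
    return the peer certificate's notAfter field."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()["notAfter"]
```

Then days_until_expiry(cert_notafter("coriolis.ch")) gives the same day count the Perl script computes with str2time.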

This script is available as a GitHub gist:

https://gist.github.com/sburlot/9a26255cc5b7d6b703fb37d40867baec

Usage: enter the list of sites by modifying the line:

my @sites = qw/coriolis.ch textfiles.com/;

and run it (via crontab, after your certbot renew cron job).

Helpful links:

https://prefetch.net/articles/checkcertificate.html
https://www.cilogon.org/cert-expire

#!/usr/bin/perl
# vi:set ts=4 nu: 

use strict;
use warnings;

use POSIX 'strftime';
use Net::SSL::ExpireDate;
use Date::Parse;
use Data::Dumper;
use MIME::Lite;

my $status = "";

my @sites = qw/coriolis.ch textfiles.com/;

my $error_sites = "";

my %expiration_sites;

################################################################################################
sub check_site_with_openssl($) {
    my $site = shift @_;

    my $expire_date = `echo | openssl s_client -servername $site -connect $site:443 2>&1 | openssl x509 -noout -enddate 2>&1`;
    if ($expire_date !~ /notAfter/) {
        print "Error while getting info for certificate: $site\n";
        $error_sites .= "$site has no expiration date\n";
        return;
    }
    $expire_date =~ s/notAfter=//g;
    my $time = str2time($expire_date);
    my $now = time;
    my $days = int(($time-$now)/86400);
    $expiration_sites{$site} = $days;
    $status .= "$site expires in $days days\n";
    print "$site expires in $days days\n";
    if ($days < 25) {
      $error_sites .= "$site => in $days day" . ($days == 1 ? "" : "s") . "\n";
    }
}

################################################################################################
sub check_site($) {

    my $site = shift @_;

    # we get an error for sites served via Cloudflare: record type is SSL3_AL_FATAL
    # Net::SSL doesn't support SSL3??
    my $ed = Net::SSL::ExpireDate->new( https => $site );
    #print Dumper $ed;
    if (defined $ed->expire_date) {
        my $expire_date = $ed->expire_date;         # return DateTime instance
        my $time = str2time($expire_date);
        my $now = time;
        my $days = int(($time-$now)/86400);
        $expiration_sites{$site} = $days;
        print "$site expires in $days days\n";
        if ($days < 25) {
          $error_sites .= "$site => in $days day" . ($days == 1 ? "" : "s") . "\n";
        }
    } else {
        $error_sites .= "$site has no expiration date\n"; # or has another error, but I'll check manually.
    }
  
}

################################################################################################
sub send_email($) {

    my $message = shift @_;

    my $msg = MIME::Lite->new(
        From    => 'me@website.com',
        To      => 'me@website.com',
        Subject => 'SSL Certificates',
        Data    => "One or more certificates should be renewed:\n\n$message\n"
    );
    $msg->send;
}

################################################################################################
print strftime "%F\n", localtime;
print "="x30 . "\n";

for my $site (sort @sites) {
  check_site_with_openssl($site);
}

# sort ascending by expiration (soonest first)
foreach my $site (sort { $expiration_sites{$a} <=> $expiration_sites{$b} } keys %expiration_sites) {
    $status .= "$site expires in " . $expiration_sites{$site} . " days\n" ;
}

print "="x30 . "\n";

if ($error_sites ne "") {
    send_email($error_sites);
}

Automatically update phpMyAdmin

I’m running phpMyAdmin to manage the MySQL databases for the hosting I manage, and I need to keep it up to date to avoid vulnerabilities, bugs, etc.

Or mainly because I want to see "up to date" in the version box.

I run this script weekly to keep my version up to date:

#!/usr/bin/php
<?php

$cmd = "cd /home/stephan/www/secret_folder/hidden; git clone --depth=1 --branch=STABLE https://github.com/phpmyadmin/phpmyadmin.git && cp -r phpmyadmin/* MyPHPMyAdmin/ && rm -rf phpmyadmin";
shell_exec($cmd);

I keep phpMyAdmin in a hidden folder, protected by a password, because a lot of scripts try to access it.

So, if the URL of your phpMyAdmin instance is

https://mywebsite.ch/secret_folder/hidden/MyPHPMyAdmin,

and is stored in

/home/stephan/www/secret_folder/hidden/MyPHPMyAdmin

the script above will fetch the latest stable release and copy it OVER your existing version, keeping all your settings intact.
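A minimal Python sketch of the same update flow, assuming the paths and the STABLE branch from the PHP one-liner above:

```python
import shutil
import subprocess
import tempfile

def clone_command(repo_url, branch, dest):
    """Build the shallow, single-branch git clone used above."""
    return ["git", "clone", "--depth=1", "--branch=" + branch, repo_url, dest]

def update_phpmyadmin(install_dir,
                      repo_url="https://github.com/phpmyadmin/phpmyadmin.git"):
    """Clone the STABLE branch into a temp dir, then copy it over the
    existing install so config.inc.php and friends are preserved."""
    with tempfile.TemporaryDirectory() as tmp:
        checkout = tmp + "/phpmyadmin"
        subprocess.run(clone_command(repo_url, "STABLE", checkout), check=True)
        shutil.copytree(checkout, install_dir, dirs_exist_ok=True)
```

shutil.copytree with dirs_exist_ok (Python 3.8+) mirrors the `cp -r phpmyadmin/* MyPHPMyAdmin/` step: it overwrites files that exist and leaves extra files alone.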

Run it via cron and you're done. This script has been running for more than a year without any problems.

Check for new Dropbox folders on Linux

For a customer, I created a service on a remote server that processes files delivered via Dropbox. The problem is that Dropbox on Linux syncs every folder under its root folder unless it's explicitly excluded. You can exclude all folders except the one you're interested in, but as soon as you add a new folder to your Dropbox, it will appear on your Linux server.

This script will warn you when a new folder appears. It doesn't exclude new folders automatically, but that feature could be added if you're brave enough.
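The core of the check (list the top-level folders, subtract the allow-list) fits in a few lines of Python; the email and Prowl notification parts of the script are left out here:

```python
from pathlib import Path

def new_folders(dropbox_root, allowed):
    """Top-level folders that are neither hidden nor on the allow-list.
    Plain files are ignored, since Dropbox can't exclude them anyway."""
    return sorted(
        p.name for p in Path(dropbox_root).iterdir()
        if p.is_dir() and not p.name.startswith(".") and p.name not in allowed
    )
```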

#!/usr/bin/perl
# When using Dropbox on Linux, the complete Dropbox folder is
# synced by default, which can use precious disk space if
# we only need some folders.
# Because we can't choose which folders will be synced on
# Linux, we can only exclude folders we don't want. So this script
# reports when a new folder is added to the Dropbox top folder.
# A nice feature would be to be able to only allow some folders.
# Note: since we can't exclude files, they are not reported.
# Don't add a large file to the root of Dropbox: you can't exclude it from syncing.

# if this script finds folders not in the allowed list, it sends
# an email and a notification, in case the mail is flagged as spam.

# I chose not to exclude new folders directly in this script,
# in case something breaks. This script runs on a server used by
# a customer as a web service endpoint, so better safe than sorry.

# To exclude a folder from syncing, use the dropbox-cli script available at
# https://www.dropbox.com/download?dl=packages/dropbox.py
# then do
# ./dropbox.py exclude add "Folder to exclude"
#
# Coriolis Stephan Burlot, Apr 11, 2018

use strict;
use Data::Dumper;
use MIME::Lite;
use WebService::Prowl;

## the path to the Dropbox folder
my $dropbox_folder = '/home/stephan/Dropbox/';

## email settings
my $email_address = 'EMAIL_ADDRESS';

## I use Prowl (prowlapp.com) to send notifications to my phone.
## prowl settings
my $prowl_api_key = 'PROWL_API_KEY';

## Allowed folders
# famous last words:
# customer: "the folder is named TEST_Service, we'll change the
# name when we go in production."
my @allowed_folders = qw/TEST_Service/;

#################################
## sends a email with the message passed as parameter
sub send_email($) {
  my $content = shift @_;
  
  my $msg = MIME::Lite->new(
    From  => $email_address,
    To    => $email_address,
    Subject => 'Dropbox Bot',
    Data  => $content
  );
  $msg->send;
}

#################################
## sends a notification via Prowl
sub send_notification($$$) {
  my ($app, $event, $message) = @_;
  if ($event eq "") {
    $event = ' ';
  }
  
  # grab your API key from prowlapp.com
  my $ws = WebService::Prowl->new(apikey => $prowl_api_key);
  $ws->verify || die $ws->error();
  $ws->add(application => "$app",
       event     => "$event",
       description => "$message",
       url     => "");

}

#################################
## MAIN
#################################

# I don't use smartmatch, i.e.
# if ($file ~~ @allowed_folders)
# so I create a hash for simple matching.
my %allowed = map { $_ => 1 } @allowed_folders;

chdir $dropbox_folder;
if (opendir(my $dh, $dropbox_folder)) {
  my @folders = grep !/^\./, readdir($dh);
  closedir $dh;
  
  # array of bad folders
  my @bad = map { (-f $_ || exists $allowed{$_}) ? () : $_ } @folders;
  if (scalar(@bad) != 0) {
    print "New folders: " . join(", ", @bad) . "\n";
    send_notification('Linode_Small', 'Dropbox Bot', "There are new folders in Dropbox: you should exclude them.");
    send_email("Hello,\n\nI found these new folders in Dropbox:\n\n" . join("\n", @bad) . "\n\nThey should be excluded.\n");
  }
} else {
  send_notification('Linode_Small', 'Dropbox Bot', "I can't open the Dropbox folder. Is it still there?");
  send_email("Hello,\n\nI can't opendir $dropbox_folder\n\nIs Dropbox still there?");
  die "Can't opendir $dropbox_folder: $!\n";
}

Enjoy.

Configuring Nginx for HTTPS access

If you manage nginx servers and get the error SSL_ERROR_RX_UNEXPECTED_NEW_SESSION_TICKET in Firefox or ERR_SSL_PROTOCOL_ERROR in Chrome when connecting to your website:

Error when connecting via Firefox

Error when connecting via Chrome

Make sure your config has the following:

ssl_session_timeout 1d;
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off;

To be sure, add these parameters to the http{} block in nginx.conf.

I had these settings in each virtual server's configuration file for HTTPS sites and it worked, but as soon as I added one more certificate, I got this error. Adding the ssl_session settings to nginx.conf solved it.

curl* will report:

curl: (35) gnutls_handshake() failed: An unexpected TLS packet was received.

* Not all versions of curl report this: on macOS 10.13.3, curl 7.54.0 doesn't report an error. On Ubuntu 16.04, curl 7.47.0 reports this error.

source

Tweet Nest support for 280 chars

If you use Tweet Nest to keep an archive of all your tweets, you need a few changes so that your long tweets are stored in full.

I’ve made a quick hack to solve this temporarily:

– Change the text column of tn_tweets to varchar(512) (in case Twitter changes the limit again…)
– In the class.twitter.api.php file, replace (at the top):

public $dbMap = array(
  "id_str"       => "tweetid",
  "created_at"   => "time",
  "text"         => "text",
  "source"       => "source",
  "coordinates"  => "coordinates",
  "geo"          => "geo",
  "place"        => "place",
  "contributors" => "contributors",
  "user.id"      => "userid"
);

with

public $dbMap = array(
	"id_str"       => "tweetid",
	"created_at"   => "time",
	"full_text"    => "text",
	"text"         => "text",
	"source"       => "source",
	"coordinates"  => "coordinates",
	"geo"          => "geo",
	"place"        => "place",
	"contributors" => "contributors",
	"user.id"      => "userid"
);

I added a mapping from full_text to text because Twitter returns 280-character tweets in the full_text field.
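The intent of the extra mapping, expressed as a tiny Python sketch (the field names come from the Twitter API; the helper itself is hypothetical):

```python
def tweet_text(tweet):
    """Prefer the 280-character full_text field (returned when
    tweet_mode=extended is requested); fall back to the classic text field."""
    return tweet.get("full_text", tweet.get("text", ""))
```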

In the loadtweets.php file, at line 127, add this line:

$params['tweet_mode'] = 'extended';

(before the

$data = $twitterApi->query('statuses/user_timeline', $params);

line)

so Twitter returns the extended tweets.

That’s all.

I posted this as a comment on the Tweet Nest repo: https://github.com/graulund/tweetnest/issues/91

Custom function in SQLite with fmdb

I've used Gus Mueller's FMDB SQLite wrapper in most of my iOS projects, and I'm in the process of migrating an app from Objective-C to Swift.

In this app, I needed a custom SQLite function to compute the haversine distance (giving the great-circle distance between two points on a sphere from their longitudes and latitudes).

In all its glory*, here’s how I did it:

db.makeFunctionNamed("distance", arguments: 4) { context, argc, argv in
    // all four arguments must be doubles
    guard db.valueType(argv[0]) == .float && db.valueType(argv[1]) == .float && db.valueType(argv[2]) == .float && db.valueType(argv[3]) == .float else {
        db.resultError("Expected double parameter", context: context)
        return
    }
    let lat1 = db.valueDouble(argv[0])
    let lon1 = db.valueDouble(argv[1])
    let lat2 = db.valueDouble(argv[2])
    let lon2 = db.valueDouble(argv[3])

    let lat1rad = DEG2RAD(lat1)
    let lat2rad = DEG2RAD(lat2)

    let distance = acos(sin(lat1rad) * sin(lat2rad) + cos(lat1rad) * cos(lat2rad) * cos(DEG2RAD(lon2) - DEG2RAD(lon1))) * 6378.1

    db.resultDouble(distance, context: context)
    
}

db is an FMDatabase, obviously.

This is how I use it:

let rs: FMResultSet? = db.executeQuery("SELECT ID, NO_BH, LAT, LONG, distance(?, ?, LAT, LONG) as distance FROM bh ORDER BY distance LIMIT 50", withArgumentsIn: [location.coordinate.latitude, location.coordinate.longitude])

Which gives me the 50 points nearest to location.
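For what it's worth, the distance formula in the Swift block (strictly speaking the spherical law of cosines rather than the haversine formula) translates directly to Python:

```python
from math import acos, cos, radians, sin

EARTH_RADIUS_KM = 6378.1  # equatorial radius, matching the Swift constant

def great_circle_km(lat1, lon1, lat2, lon2):
    """Spherical-law-of-cosines distance, as in the custom SQLite function."""
    phi1, phi2 = radians(lat1), radians(lat2)
    c = sin(phi1) * sin(phi2) + cos(phi1) * cos(phi2) * cos(radians(lon2) - radians(lon1))
    # clamp against floating-point noise before acos
    return acos(max(-1.0, min(1.0, c))) * EARTH_RADIUS_KM
```

The clamp matters in practice: for identical or antipodal points, rounding can push the cosine slightly outside [-1, 1] and make acos fail.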

*Any improvements or remarks greatly appreciated!

Network configuration of a Raspberry Pi

I thought I had configured my Raspberry Pi's IP addresses correctly, but actually, no.

Running ifconfig gives me:

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:14330 errors:0 dropped:0 overruns:0 frame:0
TX packets:14330 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:1072936 (1.0 MiB) TX bytes:1072936 (1.0 MiB)

wlan0 Link encap:Ethernet HWaddr e0:76:d0:cf:18:99
inet addr:192.168.1.21 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::e276:d0ff:fecf:1899/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:579338 errors:0 dropped:232123 overruns:0 frame:0
TX packets:30282 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:57634003 (54.9 MiB) TX bytes:2689604 (2.5 MiB)

All good?

But why does my router (a Swisscom box, in my case) tell me my Raspberry Pi has a different address?

It's because DHCP is not disabled, and the Raspberry Pi requested an address from the box.

If we use the command:

# ip addr

We get:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
 inet 127.0.0.1/8 scope host lo
 valid_lft forever preferred_lft forever
 inet6 ::1/128 scope host
 valid_lft forever preferred_lft forever
2: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
 link/ether e0:76:d0:cf:18:99 brd ff:ff:ff:ff:ff:ff
 inet 192.168.1.21/24 brd 192.168.1.255 scope global wlan0
 valid_lft forever preferred_lft forever
 inet 192.168.1.113/24 brd 192.168.1.255 scope global secondary wlan0
 valid_lft forever preferred_lft forever
 inet6 fe80::e276:d0ff:fecf:1899/64 scope link
 valid_lft forever preferred_lft forever

The 192.168.1.21 address is the one I assigned, but 192.168.1.113 shouldn't be there: it comes from DHCP.
To disable DHCP:

sudo update-rc.d dhcpcd disable
sudo service dhcpcd stop
sudo ip addr del 192.168.1.113 dev wlan0
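To spot stray DHCP leases automatically, one could grep the ip addr output for "secondary" addresses; a sketch (the function name is my own):

```python
import re

def secondary_addresses(ip_addr_output):
    """IPv4 addresses that `ip addr` flags as 'secondary', i.e. picked up
    on top of the statically assigned one."""
    pattern = re.compile(r"inet (\d+\.\d+\.\d+\.\d+)/\d+ .*\bsecondary\b")
    return pattern.findall(ip_addr_output)
```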

And voilà.

Don't forget to add the proper DNS servers to your /etc/network/interfaces file; otherwise there are none by default.

source

Installing RetroPie on a Raspberry Pi 3

RetroPie is a distribution (is that the right term?) that turns the Raspberry Pi into a retro gaming console.

I won't go into the installation details; the explanations on the RetroPie wiki are clear enough.

Here are the few difficulties I ran into and the solutions I found to get everything working:

Controller

I bought a Logitech F310 (CHF 40 at MediaMarkt).

Logitech F310 controller

It was the easiest to configure; my attempts with an 8Bitdo Zero had failed.

Boot

During boot, the message

a start job is running for LSB: Raise network interfaces (34s / no limit)

appears for a minute or more, while the network interfaces come up. Since I only use Wi-Fi but left Ethernet enabled (you never know), this delay can be reduced considerably by editing the file

/lib/systemd/system/networking.service.d/network-pre.conf

Just edit this file and add at the end:

[Service]
TimeoutStartSec=15

Other configuration

For networking, I use Wi-Fi. The file

/etc/network/interfaces

contains

auto lo
iface lo inet loopback

auto eth0
allow-hotplug eth0
iface eth0 inet dhcp

auto wlan0
allow-hotplug wlan0
iface wlan0 inet static
 wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
 address 192.168.1.15
 netmask 255.255.255.0
 broadcast 192.168.1.255
 gateway 192.168.1.1

iface default inet dhcp

and the file

/etc/wpa_supplicant/wpa_supplicant.conf

contains

network={
 ssid="My SSID"
 psk="my password"
}

Future

  • I'd like a button on the Raspberry Pi so I can shut it down or reboot it without going through the menus.
  • A somewhat sexier case.
  • Add a fan, as the Raspberry Pi runs a bit warm. Nothing serious, but a lower temperature would surely extend its lifespan.

Other emulators

During my research I came across other retro-console distributions that look decent, in particular Recalbox, which seems good. I don't know how it differs from RetroPie, but it's worth keeping in mind for a future upgrade.


Hide Xcode 8 console garbage when running the simulator

Since Xcode 8, a lot of debug info appears in the console when using the iOS simulator:

2016-10-24 15:07:11.051609 sosasthma[19813:6302216] subsystem: com.apple.siri, category: Intents, enable_level: 1, persist_level: 1, default_ttl: 0, info_ttl: 0, debug_ttl: 0, generate_symptoms: 0, enable_oversize: 0, privacy_setting: 0, enable_private_data: 0

2016-10-24 15:07:11.070089 sosasthma[19813:6302540] subsystem: com.apple.UIKit, category: HIDEventFiltered, enable_level: 0, persist_level: 0, default_ttl: 0, info_ttl: 0, debug_ttl: 0, generate_symptoms: 0, enable_oversize: 1, privacy_setting: 2, enable_private_data: 0

2016-10-24 15:07:11.080159 sosasthma[19813:6302540] subsystem: com.apple.UIKit, category: HIDEventIncoming, enable_level: 0, persist_level: 0, default_ttl: 0, info_ttl: 0, debug_ttl: 0, generate_symptoms: 0, enable_oversize: 1, privacy_setting: 2, enable_private_data: 0

2016-10-24 15:07:11.089886 sosasthma[19813:6302537] subsystem: com.apple.BaseBoard, category: MachPort, enable_level: 1, persist_level: 0, default_ttl: 0, info_ttl: 0, debug_ttl: 0, generate_symptoms: 0, enable_oversize: 0, privacy_setting: 0, enable_private_data: 0

2016-10-24 15:07:11.101244 sosasthma[19813:6302216] subsystem: com.apple.UIKit, category: StatusBar, enable_level: 0, persist_level: 0, default_ttl: 0, info_ttl: 0, debug_ttl: 0, generate_symptoms: 0, enable_oversize: 1, privacy_setting: 2, enable_private_data: 0

2016-10-24 15:07:11.134 sosasthma[19813:6302216] [Crashlytics] Version 3.7.2 (112)

2016-10-24 15:07:11.174840 sosasthma[19813:6302537] subsystem: com.apple.libsqlite3, category: logging, enable_level: 0, persist_level: 0, default_ttl: 0, info_ttl: 0, debug_ttl: 0, generate_symptoms: 0, enable_oversize: 1, privacy_setting: 2, enable_private_data: 0

2016-10-24 15:07:11.185172 sosasthma[19813:6302549] subsystem: com.apple.network, category: , enable_level: 0, persist_level: 0, default_ttl: 0, info_ttl: 0, debug_ttl: 0, generate_symptoms: 0, enable_oversize: 0, privacy_setting: 2, enable_private_data: 0

To avoid having the console filled with info about Siri, UIKit, etc., just add

OS_ACTIVITY_MODE = disable

to the environment variables of your scheme, in Product->Scheme->Edit Scheme

Now you'll only have your NSLog output in the console. That's enough garbage for a developer.

Source: Stack Overflow

Apache htaccess file for .ipa files

To allow my customers to download my iOS apps signed with Ad Hoc or Enterprise certificates, I use this htaccess file:

<FilesMatch "\.(ipa|plist)$">
 FileETag None
 <ifModule mod_headers.c>
 Header unset ETag
 Header set Cache-Control "max-age=0, no-cache, no-store, must-revalidate"
 Header set Pragma "no-cache"
 Header set Expires "Wed, 11 Jan 1984 05:00:00 GMT"
 </ifModule>
</FilesMatch>

AddType application/octet-stream .ipa
<Files *.ipa>
 Header set Content-Disposition attachment
</Files>

The cache directives are not mandatory, but large customers are usually behind a reverse proxy, and I want to avoid side effects if possible.

So far, so good.