
HTTP/3 QUIC Protocol NEEDS to Change, and How to Disable It


This has been an ongoing discussion between administrators ever since Google started using the HTTP/3 QUIC protocol. What is HTTP/3 QUIC? It is effectively a hidden UDP gateway that encrypts traffic directly to a Google service. The problem? Administrators cannot see any network information in netstat (on Windows, Linux, Android or iOS) for what the protocol is connected to. This is beyond bad: if you are using any type of VPN (enterprise or otherwise) you have no idea what connections are going through the HTTP/3 QUIC protocol. I am pretty sure virus and malware creators will be jumping onto this protocol, as its traffic is next to impossible to track (unless you are an enterprise with hardware connection inspection), and even with that inspection you do not know what the protocol is connecting to. A lot of other services have started to use the protocol.
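For reference, these are the kinds of commands an administrator would normally use to audit connections on a Linux host (a minimal sketch; it assumes the iproute2 and net-tools packages are installed, and at best QUIC traffic only appears here as bare UDP sockets on port 443):

# List UDP sockets with the owning process (iproute2)
ss -u -a -n -p

# Older net-tools equivalent
netstat -anup

# Narrow the output down to UDP 443, the port HTTP/3 QUIC uses
ss -u -a -n -p '( sport = :443 or dport = :443 )'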

Google changed the protocol from using outgoing UDP 443 to creating its own 'hidden' gateway. In netstat, while the protocol is in use, every connection is a separate hidden gateway (there is no way of seeing IP data), whereas with normal TCP and UDP connections you see every connection, with IP data for each one. Netstat was meant for exactly this type of checking, so that admins and users can look for virus and malware connections, and for other uses such as application firewalling. On Linux, anything involving networking and gateways normally requires root privileges (I will be pushing for Debian to block this protocol), yet this gateway creation does not, for some reason. Why has Google designed it this way? Because they want to hide the gateway information and what is going through it; there is no other logical answer. Organizations need to block this protocol completely.
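As a starting point for blocking it network-wide, dropping outbound UDP 443 forces clients to fall back to HTTP/2 over TCP 443. A minimal nftables sketch (it assumes you are creating a fresh inet table called filter; merge the rules into your existing ruleset rather than running this blindly):

# Create a table and an output chain if you do not already have them
nft add table inet filter
nft add chain inet filter output '{ type filter hook output priority 0; policy accept; }'

# Drop outgoing QUIC / HTTP/3 (UDP 443); browsers will retry over TCP 443
nft add rule inet filter output udp dport 443 drop

# On a router or gateway, add the same rule to the forward chain
# so LAN clients cannot use QUIC either (assuming a forward chain exists)
nft add rule inet filter forward udp dport 443 drop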

Another thing I do not get about browser creators: why are you force-enabling 'experimental protocols' by default? You are taking one company's protocol and auto-enabling it without questioning it.

How to Disable HTTP/3 QUIC

As this protocol is built into most browsers, you have to disable it in every browser profile, for whatever browser you are using.

Google Chrome
In the browser address bar, type chrome://flags. Disable the Experimental QUIC protocol option.

Microsoft Edge
In the browser address bar, type edge://flags/. Disable the Experimental QUIC protocol option.

Mozilla Firefox
In the browser address bar, type: about:config. Search for network.http.http3.enable and set it to false.

Opera
In the browser address bar, type: opera://flags/#enable-quic. From the Experimental QUIC protocol drop-down list, select Disabled.

Debian 12 – PHP 8.2 – Nginx


As Debian 12 came out recently, I thought I would do some PHP 8.2 testing on it.

VPS Setup:
1x Intel core @ 3GHz (one core of a multi-core processor)
1024MB RAM
120GB SSD

OS:
Debian 12 - Release 11-06-23

Services:
Web Server: Nginx 1.22.1
Hypertext Preprocessor: PHP 8.2.7
MySQL Database: MariaDB 10.11.3
HTTPS Encryption: TLS 1.3 RSA 2048
Firewall: nftables (200KB live server ruleset)

Server Boot

Server Total Usage:
157MB

Process Memory Usages:
php-fpm: master process 2MB
php-fpm: pool www 2MB
php-fpm: pool www 2MB
MariaDB 9.6MB
nginx: worker process 0.1MB
nginx: master process 0.1MB

After loading the blog site (receiving data from MariaDB)

Server Total Usage:
178MB

Process Memory Usages:
php-fpm: master process 2MB
php-fpm: pool www 2.1MB
php-fpm: pool www 2MB
MariaDB 9.6MB
nginx: worker process 0.12MB
nginx: master process 0.11MB
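For anyone wanting to gather comparable figures, per-process resident memory can be pulled with something like this (a sketch; the Debian 12 process names php-fpm8.2 and mariadbd are assumed, and RSS is reported by ps in kilobytes):

# Resident memory (RSS, converted to MB) for the web stack processes
ps -C nginx,php-fpm8.2,mariadbd -o rss=,args= | \
  awk '{ mb = $1 / 1024; $1 = ""; printf "%6.2fMB %s\n", mb, $0 }'

# Overall server memory picture
free -m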

Page Load Times:
General blog page load time, served directly from PHP; the page loads 10 blog entries. This is not using a reverse proxy or a cache.

184ms Page Load - General Blog Site
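If you want to reproduce a load-time figure like this, curl's timing variables are a quick way to do it (a sketch only; it measures the server response for the HTML, not full browser rendering, and https://example.org/ stands in for the blog URL):

# Time to first byte and total transfer time for the front page
curl -s -o /dev/null \
  -w 'ttfb: %{time_starttransfer}s  total: %{time_total}s\n' \
  https://example.org/

# Average the total time over 10 requests
for i in $(seq 1 10); do
  curl -s -o /dev/null -w '%{time_total}\n' https://example.org/
done | awk '{ sum += $1 } END { printf "avg: %.3fs over %d runs\n", sum/NR, NR }'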

Debian 12 “bookworm” Changes


A new version of Debian was released a few weeks ago, Debian 12 “bookworm”. I have created a list of the changes below.

This new version of Debian “bookworm” contains over 11,089 new packages for a total count of 64,419 packages, while over 6,296 packages have been removed as “obsolete”. 43,254 packages were updated in this release. Debian 12 “bookworm” is made up of 1,341,564,204 lines of code.

System

  • Linux kernel 6.1 (from 5.10)
  • systemd 252 (from 247)

Web Servers

  • Apache 2.4.57
  • nginx 1.22.1

Programming Languages

  • PHP 8.2 (from 7.4)
  • Python 3.11.2
  • Rustc 1.63

Database Servers

  • MariaDB 10.11
  • PostgreSQL 15

Architectures officially supported:

  • 32-bit PC (i386) and 64-bit PC (amd64),
  • 64-bit ARM (arm64),
  • ARM EABI (armel),
  • ARMv7 (EABI hard-float ABI, armhf),
  • little-endian MIPS (mipsel),
  • 64-bit little-endian MIPS (mips64el),
  • 64-bit little-endian PowerPC (ppc64el),
  • IBM System z (s390x)

32-bit PC (i386) no longer covers any i586 processor; the new minimum processor requirement is i686.

Cloud Computing Services:

  • Amazon EC2 (amd64 and arm64),
  • Microsoft Azure (amd64),
  • OpenStack (generic) (amd64, arm64, ppc64el),
  • GenericCloud (arm64, amd64),
  • NoCloud (amd64, arm64, ppc64el)

Desktop Environments:

  • Gnome 43,
  • KDE Plasma 5.27,
  • LXDE 11,
  • LXQt 1.2.0,
  • MATE 1.26,
  • Xfce 4.18

There is currently a slight problem with memory usage reporting via free -m, which the Debian team are working on. Overall, for servers it seems a lot faster, but with a slight increase in memory usage. I am currently testing PHP 8.2 on a server; a post will be up soon.

Misconceptions about Reverse-Proxies


I’ve seen a lot of misconceptions about reverse proxies recently from amateurs setting up shared hosting websites, which in turn leads them to believe that encryption is broken.

Reverse Proxy must mean anyone can Proxy TLS data… WRONG

A reverse proxy is not a proxy in the conventional sense when it comes to proxying HTTPS TLS connections. The web browser encrypts all data between you and the server IP returned by the DNS request, at EVERY network point, including your local network. I think the biggest misconceptions come from tick-box hosts like Wix, because they do all of this for you. You have to install the origin server's TLS certificate and key on the proxy and point the DNS record at the proxy. For any tech person this should be a big giveaway: the browser connects to the proxy, NOT the origin server. A second encrypted tunnel is then used between the proxy and the origin server. So encrypted packets are never proxied straight through a reverse proxy; it would be impossible to do. I've also heard people say, "If I use the same cert it can go straight through." No. If you think this you don't understand Transport Layer Security: the proxy and the origin server will NEVER hand out the same session key (which secures the connection).
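To make the mechanics concrete, here is a rough sketch of a TLS-terminating reverse proxy as an nginx server block written out on the proxy host (example.org, the certificate paths and the origin address 203.0.113.10 are all placeholders): the browser's TLS session ends at the proxy, which holds the site's certificate and key, and a second, separate TLS connection is opened to the origin.

# On the proxy server: terminate the browser's TLS session here
cat > /etc/nginx/conf.d/example-proxy.conf <<'EOF'
server {
    listen 443 ssl;
    server_name example.org;

    # The site's certificate and key live on the proxy,
    # because the browser connects to the proxy, not the origin.
    ssl_certificate     /etc/ssl/example.org/fullchain.pem;
    ssl_certificate_key /etc/ssl/example.org/privkey.pem;

    location / {
        # Second, separate encrypted connection: proxy to origin
        proxy_pass https://203.0.113.10;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
EOF
nginx -t && systemctl reload nginx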

Reverse Proxy makes my Shared Hosting Server Secure… WRONG

The second thing people don’t understand is that ports are still open on the origin server and you need to firewall them… which you can’t do on a shared host. If a shared host’s server has not been updated for 7 years (90% of them), Apache will have vulnerabilities and the origin server is still open to the entire internet. The only way of securing a reverse proxy setup and ensuring hackers don’t figure out the origin server’s IP (trust me, they can; if I can do it, so can any hacker) is through firewalling, and the only way you can firewall is with a VPS or dedicated server. This is why security experts tell people not to use shared hosting any more. You will never secure a shared hosting server that is open to the world.
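The firewalling being described here looks roughly like this on the origin server: the web ports answer only to the proxy's address and everything else is dropped. A minimal nftables sketch (198.51.100.5 and 192.0.2.20 are placeholder addresses for the proxy and the admin; merge it into your real ruleset rather than applying it as-is):

# On the origin server: default-drop, then allow only what is needed
nft add table inet webfilter
nft add chain inet webfilter input '{ type filter hook input priority 0; policy drop; }'
nft add rule inet webfilter input ct state established,related accept
nft add rule inet webfilter input iif lo accept

# Web ports are reachable from the reverse proxy only
nft add rule inet webfilter input ip saddr 198.51.100.5 tcp dport '{ 80, 443 }' accept

# SSH from the admin's own address only
nft add rule inet webfilter input ip saddr 192.0.2.20 tcp dport 22 accept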

These misconceptions about how reverse proxies work lead people into thinking that encryption is broken; I have seen it so many times. They then think that anyone can proxy encrypted data. I know someone who now claims, "I don't know why people use VPNs, because they can log all your TLS traffic," purely because he saw the words reverse proxy; he now thinks anyone can decrypt encrypted traffic. These people don't understand the basics of Transport Layer Security, and they don't understand how the browser (or any other application) uses it.

Debian 10 (buster) Changes


The most notable changes in Debian 10 (buster): ifconfig (net-tools) is no longer the default tool; Debian has changed to using ip, which falls in line with other distributions. Here is a brief overview of some version changes:

System

  • Linux kernel 4.19 (from 4.9)
  • systemd 241 (from 232 – which has forced many changes)

Web Servers

  • Apache 2.4.38 (from 2.4.25)
  • nginx 1.14 (from 1.10)

Programming Languages

  • Go 1.11 (from 1.7)
  • Node.js 10.15.2 (from 4.8.2)
  • PHP 7.3 (from 7.0)
  • Python 3.7.2 (from 3.5.3)
  • Ruby 2.5 (from 2.3)
  • Rust 1.34 (from 1.24)

Database Servers

  • MariaDB 10.3 (from 10.1)
  • PostgreSQL 11 (from 9.6)

Moved from iptables to nftables for firewall rules and packet filtering.

The biggest change for me as a sysadmin is the move from iptables to nftables. I have written modules for iptables rules and rulesets within my cross-platform Linux administration software, and I will be writing some new software for nftables. The official Debian documentation is not great; it links to the official nftables wiki, but the tools you need to use (which seem half-written) cannot be installed on Debian 10. There is an apt package with the tools to convert iptables rulesets to nftables rulesets, named iptables-nftables-compat; as stated, you cannot install this from a default Debian 10 apt list, so you may have to do this before upgrading. The official nftables documentation says to use a tool named iptables-restore-translate; I have used this tool but still had to go through my rulesets to change certain things it did not pick up. Be very careful using these tools, as not everything in iptables rulesets gets translated correctly. I am seeing a lot of posts about this move from sysadmins online at the moment.
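For anyone doing the same migration, the translation tools mentioned above are used roughly like this (a sketch; always diff the output against your original rules, because, as noted, not everything gets translated correctly):

# Dump the live iptables ruleset and translate it to nftables syntax
iptables-save > ruleset.iptables
iptables-restore-translate -f ruleset.iptables > ruleset.nft

# Review the translated file by hand before loading anything
less ruleset.nft

# Load it and check what is actually live
nft -f ruleset.nft
nft list ruleset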

Another big change you will notice is the move to using systemctl for a lot of commands that you used to run directly in bash; this is down to the new systemd version.
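A few everyday examples of the systemd way of doing things (nginx is just a stand-in service name here):

# Old habits: 'service nginx restart', tailing /var/log/syslog
# systemd equivalents:
systemctl status nginx
systemctl restart nginx
systemctl enable --now nginx        # start now and enable at boot
journalctl -u nginx --since today   # service logs via the journal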

Shared Host or Linux Pro Server?


I’ve posted a lot against shared hosting on here in the past, and thought I would write a quick review with some facts and information on why shared hosting is so bad. People say, “Hey, I get the same site with shared hosting for free as I do paying a Linux expert”… No, you don’t. In this article I will go through the differences, with examples of insecurities.

Shared hosts have no way of firewalling or blocking ports, because the ports have to be open worldwide for worldwide customers, which means no one using the shared host can secure the server properly. Secondly, most shared hosts never update their servers, which leads to unpatched vulnerabilities that hackers and agencies can use to compromise the server. Customers of shared hosts have zero access to vital server logs, such as email server logs (because the service is shared across thousands of other domains), which are the main place admins need to look to see attempted, successful and blocked attacks. Shared hosts I have seen recently have as many as 100,000 domains assigned to a single IP. This doesn’t necessarily mean they are running all of them on one server (it could be a cluster of servers behind a single IP), but in 90% of cases that is what they are doing.

Dedicated, secure setups by Linux admins are completely different, as ports don’t have to be open worldwide, which means the Linux admin can lock down ports to allow only the server owner and the customer access to them. Not only that, they can also lock down ports to specific IPs for, say, receiving mail (IMAP etc.), so only the customer can access email on that server. They can also set up reverse proxies with firewall rules: in a reverse proxy setup the ports on the real dedicated server are locked to the proxy server, so only the proxy server can see that the ports are open and connect to them from a single IP. This brings security in many ways; online scanners will never pick up the ports being open, and hackers won’t know where the real server is or find out the real server’s IP.
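As an example of that kind of lock-down, on a dedicated server the mail-reading ports can be opened to a single customer address only (a hedged nftables sketch; 192.0.2.50 is a placeholder for the customer's static IP, and an existing inet filter table with a default-drop input chain is assumed):

# Only the customer's own IP may read mail (IMAPS) or submit it (Submission)
nft add rule inet filter input ip saddr 192.0.2.50 tcp dport '{ 993, 587 }' accept

# Port 25 stays open so other mail servers can deliver,
# but it can still be narrowed down by country or reputation lists
nft add rule inet filter input tcp dport 25 accept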

Shared Hosting Examples: Vulnerabilities and Load Times

Shared hosts that don’t get updated for 11+ years are causing even more problems worldwide, as hackers use those servers to piggyback on and attack from. We see it in server logs all the time: vulnerable shared hosts being used for both attacks and scanning. When shared hosting first started it was widely used, because servers cost a lot and were not very fast. These days there is no benefit to using shared hosting; if you do use it and are happy with it, don’t be surprised if your emails and data are leaked online. By law, companies are now liable for data protection, which means the servers they use must be secure if they want to run a shop. If a shared server is breached and your customers’ details are leaked, the shared host will tell you that you are liable, not them. Shared hosts are not meant to be used for taking payments or running online shops, even if they have TLS. A lot of amateurs creating websites think that because they have TLS they are secure; this is false.

What happened to UK Tech?


I recently watched a talk by Bruce Schneier (cryptographer and computer security expert) and it got me thinking about all the things that have destroyed UK tech. Bruce outlines the problems of governments not understanding how tech works but still pushing through policy. The video also covers IoT and the future problems IoT will cause (placing everything on the Internet that doesn’t need to be there). You can check the video out at the bottom of this post.

Going back to the title of this article: what has happened within UK tech over the past 10 years? Software companies have closed, and all the big tech-based retail websites have moved their servers out of the UK. Why? Unless you keep up with the industry you probably won’t know. The tech industry overall is a tight-knit community; people from all over the world communicate on social media (mainly Twitter; most security people are against Facebook) about information security, development and policy changes. This keeps the vast majority of servers secure through information sharing. To secure systems you need to know how attacks work and what vulnerabilities can be used against a system, which means information sharing and instant access to information about service or software vulnerabilities is vital to keeping systems and networks secure. Insurance companies also have a big interest in information security, as they have to assess liability.

The policy underlying the decline in UK tech began when the Tories got into power. When Theresa May was Home Secretary she spent her whole time moaning about tech (see the story continuing?). She publicly announced that 1) she was going to force tech companies to log all internet activity into a national database, and 2) she would remove human rights in order to do this. Privacy is recognised under international law by the UN as a human right. A lot of people don’t seem to realise the difference between posting things publicly online and what is private, so I will explain: logging private internet browsing means collecting every page you browse, not just the information you post online. This would mean (if it worked) that everything you browse is logged into a national database that can be viewed and edited to create false positives. It also means other private information can be disclosed (emails, form data, addresses, phone numbers etc.). Law enforcement already has a massive upper hand with tech; they have more information now than they have had at any point in human history.

In the talk, Bruce states that pushing policy without understanding it is more dangerous than non-technical people think. Not only does bad policy introduce insecurity, it also affects a country’s economy; the tech industry is the biggest industry in the world right now, and that is a fact. Recently we’ve seen even worse policy pushed in Australia, where laws have been passed to backdoor all encryption; this will never work in practice and they will have no way to enforce it. Security people within ICT have been constantly stating: “A government-only backdoor is impossible.” If you have a backdoor in your systems, hackers (a minor threat) can find it and access data through it, and enemy intelligence agencies (a massive threat) will also find these backdoors and access information. Not only can they use the backdoors to find information (boring; they want network access), they could potentially use them to move across entire internal and external networks and use those networks to attack from. For the public, backdooring tech systems would mean no one could safely purchase anything online: any encryption backdoor makes encryption irrelevant, so people would effectively be sending credit card and personal information insecurely. This will never work and will never be enforceable. Companies will carry massive liability for the data if someone finds the backdoor; insurance companies will be liable, not the government. Any company looking at this kind of wording is going to do what? Yes, move all their data out of the country proposing these policies. This is exactly what has happened in the UK under the Tories.

Related Links:
 1 - Bruce Schneier: "Click Here to Kill Everybody" | Talks at Google
 2 - UK Surveillance Regime dealt another blow in court

IBM: Cybersecurity – skills matter more than degrees


IBM believes skills matter more than degrees. Motivated employees who love their work — and demonstrate ambition and a willingness to advance their skills — can learn what they need through a combination of on-the-job training and ongoing education.

This is something a lot of people in InfoSec and cyber security have been saying for years: skills matter more than degrees. Leading security companies realise this, and IBM is certainly a leader in computer and server security.

I have helped a number of people who have degrees in computer science with both security and server setup. They hadn’t kept up with training, were having problems understanding how server services and ports worked, and couldn’t follow documentation on server service configs. The sad thing is that a lot of companies don’t follow what IBM is saying; they hire people purely on the basis of having a degree, then wonder why their data is not stored securely or their servers are 10 years out of date (yes, I’ve seen this happen at smaller web companies).

People think they know what they’re doing because they have set up shared hosting. There is a distinct difference between using shared hosting and being capable of setting up a secure server cluster with firewall rules and reverse proxying. With shared hosting you have no access to firewall rules for the server, and the servers are open to attack from across the world; they have to be open worldwide because hosting companies aim to get customers worldwide.

I keep saying this in posts: big companies looking to hire people can see how competent someone is from how their websites and servers are set up. They can also gauge competence through web validation techniques. I see a lot of small companies and tech managers who don’t understand this.

Linux Server Administration Software


I’ve been writing some cross-platform software for Linux administration, mainly for firewall rules and log-based threat mitigation. You can automate some threat mitigation, but for mail servers (as an example) there are a lot of things you can’t automate; this was the main reason I wrote the software.

If you think about mail server whitelisting and greylisting, a lot of legitimate services have blacklisted or blocked IPs (Google, Mailchimp, social networks etc.), so you can’t really set an automated service to ban IPs or ranges just because they appear on a blacklist. There are still a lot of things administrators need to do, which is why I have said before that it is important for administrators to check logs daily (if possible) for threat mitigation. You need to check what networks the blocked IPs belong to before blocking them in firewall rules (if you intend to do so; personally I do). As I stated before, if I know a certain server or customer is not going to receive email from a certain country, I can block email server access from those countries. You can achieve web and mail separation by using a reverse proxy, or you can set firewall rules through Fail2Ban for a specific service.

The software has automated log checking and can automate scripts for real-time mitigation on the server (for other ports and services), but it was predominantly written for administrators to use at their desk, not live on the server. I have added database functionality so admins can import blocked ranges or IPs from a text list into an SQLite3 or SQL database. You can also create an iptables ruleset from a list of ranges or IPs on your local computer and then upload the file to a server; every IP and range is verified during creation, so if you accidentally typed an invalid IP the firewall won’t kick you out when the rules go live! Importing a large number of ranges this way reduces server memory usage compared with importing every line through the server. The software also has options to create specific iptables command lines for single IPs and ranges (including specific Fail2Ban options). More info to come soon.
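The ruleset-generation idea is nothing more exotic than the following rough stand-alone sketch (not the actual software): it reads a hypothetical blocklist.txt, validates each entry as an IPv4 address or CIDR range, and writes an iptables-restore style file that can be uploaded and loaded in one go.

#!/bin/bash
# Build an iptables-restore ruleset from a list of IPs/ranges,
# skipping anything that is not a valid IPv4 address or CIDR.
in_list="blocklist.txt"        # one IP or CIDR range per line
out_rules="blocklist.rules"

valid_cidr() {
  local ip=${1%/*} mask=32
  [[ "$1" == */* ]] && mask=${1#*/}
  [[ "$mask" =~ ^[0-9]+$ ]] && (( mask <= 32 )) || return 1
  IFS=. read -r a b c d <<< "$ip" || return 1
  for octet in "$a" "$b" "$c" "$d"; do
    [[ "$octet" =~ ^[0-9]+$ ]] && (( octet <= 255 )) || return 1
  done
}

{
  echo '*filter'
  while read -r entry; do
    [[ -z "$entry" || "$entry" == \#* ]] && continue
    if valid_cidr "$entry"; then
      echo "-A INPUT -s $entry -j DROP"
    else
      echo "skipping invalid entry: $entry" >&2
    fi
  done < "$in_list"
  echo 'COMMIT'
} > "$out_rules"

# On the server, load without flushing existing rules:
#   iptables-restore --noflush < blocklist.rules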

Mail Servers + IBM X-Force Exchange


I wrote an article a while back about mail servers and protecting against repeat attackers. If you have blacklists, SPF and DKIM set up on your server, know what you are doing with mail servers, and understand how proxies and blacklists work, you shouldn’t have many (if any) spam emails getting through.
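A quick way to sanity-check that SPF and DKIM records are actually published for a domain (example.org and the selector name mail are placeholders; the DKIM selector is whatever your signing setup uses):

# SPF lives in a TXT record on the bare domain
dig +short TXT example.org | grep 'v=spf1'

# DKIM public key lives at <selector>._domainkey.<domain>
dig +short TXT mail._domainkey.example.org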

You may see repeat connections from the same IP or range over multiple days, weeks or months in your server logs (as explained before). Personally, if I see the same IP being dropped by spam services or filters multiple times, I place a ban on that IP or range. If you know that you are not going to be receiving email from a certain country, there are ways of blocking countries in iptables. If you don’t want to ban these IPs, there is a way of checking how much of a risk they are and whether they are pushing malware.

I would suggest IBM X-Force Exchange to any sysadmin with an interest in network risk; the main use for X-Force is probably for sysadmins running email servers. IBM X-Force tracks IP addresses that are repeat offenders in current known attack models, including IP scanning, brute forcing, spam mail, malware distribution and app targeting. The site rates the risk from 0 to 10. It also provides information about the IP, including origin, network owner and whois info. The interesting thing about the spam information is that people can upload mail samples and malware samples received from a specific IP, and the site displays whether the IP has pushed malware before. You can also set up scripts to import, say, spam IPs with a risk score over 6.
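Scripting that kind of import can be done against the X-Force Exchange REST API. The sketch below is only an assumption about the API layout (the api.xforce.ibmcloud.com host, the /ipr/ endpoint, the score field and the basic-auth credentials should all be checked against IBM's current documentation before relying on it):

# Look up the reputation score for a single IP (assumed endpoint layout)
XF_KEY="your-api-key"          # placeholder credentials
XF_PASS="your-api-password"
ip="203.0.113.99"

curl -s -u "$XF_KEY:$XF_PASS" \
  "https://api.xforce.ibmcloud.com/ipr/$ip" | jq -r '.score'

# IPs scoring above 6 could then be appended to a blocklist file and turned
# into firewall rules with a generator like the one sketched in the previous post.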

Related Link:
 1 - IBM X-Force Exchange