Tuesday, 24 June 2014

Real-time Attack Maps

One of the greatest advances in cyber security in recent times has been the use of threat intelligence.  We are turning from reactive to proactive defence, and the key is knowing what is happening in cyber space.  These maps show a variety of attack types: some cover a range of attacks and some specifically show attacks such as DDOS.

There are some quite mesmerising real-time plots of ongoing attacks - I've included a few below which I think you'll find as fascinating as I do:


Saturday, 10 May 2014

Don't Panic But Pause For Thought

I wrote an article for The Conversation in the wake of the disclosure of the Heartbleed vulnerability.  The article tries to put the problem in perspective and also tries to divine the true lesson from the incident.

It is worth noting that the latest scans show that 300,000 servers still exhibit the vulnerability, so it remains a problem, and one that could easily affect you, meaning that the lessons remain very relevant.

You can read the full article here.




Now The Web Has Turned 25

I wrote a piece for The Conversation reflecting on how successful the Web has been since its inception 25 years ago.  Part of that reflection was wondering whether the foundations on which this vital piece of modern life is based are perhaps a little shaky.

You can read the full article here.


Thursday, 20 February 2014

What's The Next Reflection Attack?

Two years ago we were all talking about DNS reflection attacks and the possibility that they might make an appearance. A year later they did just that, and on a massive scale.  These DDOS attacks, which use distributed groups of machines to mount reflection attacks, have become known as Distributed Reflective Denial of Service attacks, or DRDOS.

Sadly, DNS servers were not the only part of the internet vulnerable to this sort of misuse, which allows a perfectly valid (indeed vital) piece of functionality to be subverted and used to mount a denial of service attack. Just as we had been saying a few months ago, other, often forgotten protocols can also be misused to mount DDOS attacks.


And so it was that we saw the largest DDOS attack yet recorded, which used the obscure Network Time Protocol (NTP). Those of us who watch such things did see some evidence of such an attack building during the Christmas period of 2013: hackers were playing with the protocol to mount small-scale attacks.  That appears to have been merely a proof of concept for what was to come some weeks later.

At least we now know the weapons that will be used, right? Personally I'm not sure internauts have quite understood the scale of the problem.  Awareness is growing of the potential size of such attacks, but DNS and NTP are not the only tools that could be used.  As I've been trying to say, there are several protocols that hold the potential to be misused in the same way.

The protocols that are potentially vulnerable are based upon the User Datagram Protocol (UDP). Many will have heard of TCP, which underlies much of what we do on the web, but few realise that UDP is running alongside it.  Whereas TCP is designed to resend packets of data if they fail to arrive at their destination, UDP is more akin to fire-and-forget: it is what is called a connectionless protocol.
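A minimal sketch using Python's standard socket module illustrates the difference; the address and port are placeholder values from a documentation range, not anything real:

```python
import socket

# UDP's "fire and forget" nature: no connection, no handshake, no resend.
TARGET = ("192.0.2.10", 9999)  # TEST-NET address, purely illustrative

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # datagram = UDP

# The datagram is simply sent. If it is lost in transit, nothing in UDP
# itself will notice or retransmit it.
sock.sendto(b"hello", TARGET)
sock.close()

# Contrast with TCP, where a connection must be established first and
# lost segments are retransmitted automatically:
# tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# tcp.connect(TARGET)   # three-way handshake happens here
# tcp.sendall(b"hello")
```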

For protocols that run on UDP as opposed to TCP, it is slightly easier to spoof the sender's IP address, meaning that if the recipient is asked to return some data it can be diverted to an IP address that never requested it.  In essence, this is a reflection attack.  Imagine harnessing many systems that all request some service to return data while pretending to be one target machine: that target will suddenly be deluged by the responses.

But what protocols run on UDP and are thus liable to be usurped if not protected against? There are many, but they include the often used SNMP, NetBIOS, SSDP, CharGEN and QOTD.
There are also protocols used for peer-to-peer networking, such as BitTorrent and Kad, that could be similarly misused.
There are even games that use potentially vulnerable protocols, such as the Quake Network Protocol, and niche communities like Steam.

An obvious question to ask is why hackers have chosen DNS and then NTP over any of these other possibilities.  As ever, the devil is in the detail.  If trying to mount a DDOS attack you want to divert as much data as possible to your target, and each of these protocols returns differing amounts, with bandwidth amplification factors as shown here:


Protocol: bandwidth amplification factor
DNS: 28 to 54
NTP: 556.9
SNMPv2: 6.3
NetBIOS: 3.8
SSDP: 30.8
CharGEN: 358.8
QOTD: 140.3
BitTorrent: 3.8
Kad: 16.3
Quake Network Protocol: 63.9
Steam Protocol: 5.5


Once attackers had worked out, using DNS, that reflection attacks were a viable way of mounting DDOS attacks, it is not surprising that they opted to try NTP next, as it can produce up to 10 times more data.  It also doesn't take long to realise that some other protocols, such as CharGEN and even the venerable QOTD, can produce more data than the original DNS attacks.  Perhaps these are the next to be misused by attackers.
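A rough back-of-envelope calculation, using the factors from the table above and an arbitrary 10 Mbit/s of spoofed request traffic, shows why these numbers matter so much to an attacker:

```python
# Illustrative only: amplification factors are taken from the table above;
# the 10 Mbit/s of spoofed request traffic is an arbitrary example value.
AMPLIFICATION = {
    "DNS": 54,       # upper end of the 28-54 range
    "NTP": 556.9,
    "CharGEN": 358.8,
    "QOTD": 140.3,
}

request_traffic_mbps = 10  # what the attacking machines actually send

for protocol, factor in AMPLIFICATION.items():
    reflected = request_traffic_mbps * factor
    print(f"{protocol:8s}: {reflected / 1000:5.1f} Gbit/s arrives at the victim")
```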

Whichever protocol is used in future attacks, one thing is certain: mounting such attacks is becoming easier.  The reason is that attackers are producing toolkits that allow someone with little technical knowledge to press the button and fire off a DDOS attack that exploits UDP-based protocols.  We have already seen version 1.1 of a DNS-based toolkit circulating, so I can't believe it will be long before we see an NTP-based toolkit, or possibly even a toolkit that allows you to select your preferred attack protocol.

However, before you throw up your hands and think the Internet is doomed, it is worth noting that there are defences against such attacks.  Since 2000 there has been a standard (BCP38) which shows ways of defeating attacks that use IP spoofing.  Needless to say there are many commercial products that will help you do this, but I don't intend to recommend any in particular, if only because in choosing such products context is everything.
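The core of BCP38 is source-address validation at the network edge: don't let packets leave your network claiming to come from addresses you never allocated. Here is a conceptual sketch of that check in Python; the real thing lives in router and firewall configuration, and the prefixes used are documentation ranges chosen purely for illustration:

```python
import ipaddress

# Sketch of the BCP38 idea: drop outbound packets whose source address does
# not belong to the prefixes actually allocated to your customers.
CUSTOMER_PREFIXES = [ipaddress.ip_network("198.51.100.0/24")]

def permit_outbound(source_ip: str) -> bool:
    addr = ipaddress.ip_address(source_ip)
    return any(addr in prefix for prefix in CUSTOMER_PREFIXES)

print(permit_outbound("198.51.100.25"))  # True  - legitimate customer source
print(permit_outbound("203.0.113.7"))    # False - spoofed source, drop it
```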

One very useful place to start is the Spoofer Project which aims to help you understand the susceptibility of the Internet (or at least that part which you inhabit) to IP spoofing.

DRDOS attacks are here to stay and 2014 is likely to see them growing in size and number.  As with all DDOS attacks, you won't stop them but you can mitigate them significantly.  The trick is to be aware and get prepared.





Friday, 13 December 2013

The 12 Cyber Scams of Christmas (2013)

In an effort to raise awareness of some of the cyber scams that people might run foul of this Christmas, I've published on the BBC a quick guide to 12 to watch out for.  There are many more, but hopefully this will give you an idea of what to beware of: http://www.bbc.co.uk/news/technology-25200338

Monday, 9 December 2013

How Are Passwords Stored?


The past two years have seen some dramatic leaks of passwords, including from well-known names such as LinkedIn and Adobe. These events shone a light on how our passwords are being stored. If someone is daft enough to store our passwords as plain text then they do not deserve to be trusted with them; most instead attempt to protect passwords by storing a "hash" of them.
Hashing functions have been around since the early 1950s and were developed to allow, for example, fast comparison of database entries to see if there were duplicates.  Many hash functions have been developed over the years but they all do basically the same thing: they take an arbitrarily long set of characters and transform it into a much shorter, fixed-length string of characters.  The same input always produces the same output, but the likelihood of two different inputs producing the same output (known as a "collision") should be negligible.
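A quick demonstration using Python's standard hashlib module (SHA-256 here is just a convenient modern example) shows both properties:

```python
import hashlib

# Arbitrarily long input, fixed-length output: the same input always gives
# the same digest, and different inputs should (barring a collision, which
# should be negligibly likely) give completely different digests.
print(hashlib.sha256(b"my dog has big ears").hexdigest())
print(hashlib.sha256(b"my dog has big ears").hexdigest())  # identical to the first
print(hashlib.sha256(b"my dog has big earz").hexdigest())  # completely different
```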
Why does that help? Well, on the relatively slow machines of the time it was better to compare shorter strings of characters when looking for matches. Plus the development of hash functions focussed on making the hash function very fast. Hence, producing hashes and using them to find, for example, a match was significantly faster than trying to do so using the original data.
Then came the development of "cryptographic hashes", which most refer to today simply as "hashes".  These secure hashes are like the original hash functions except that they put extra emphasis on preventing someone from determining anything about the input based solely on the hashed value: a one-way function. Reversal was already difficult because, in compressing the data to produce the hash, information is always lost (so-called "lossy compression"), but cryptographic hashes are tested specifically for their ability to resist being reversed.
An obvious use for these cryptographic hashes was password management.  Instead of storing our passwords in plain text, a system could now receive our password, hash it and compare it with the stored hash.  If the two matched it was almost certain that the password we had sent was correct. Hash functions with names like MD5 and SHA1 appeared to do just this, with some becoming standards recommended by many governments for securing passwords on their systems.  As time has passed, researchers have found that some of these hash functions have weaknesses and so are not quite as "one way" as had been hoped.  Hence you start to notice that major vendors have begun to retire certain algorithms in favour of newer ones.
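A minimal sketch of that scheme, using SHA-256 purely for illustration (and, deliberately, without the salting discussed further down), might look like this:

```python
import hashlib

def hash_password(password: str) -> str:
    # Unsalted SHA-256, purely to illustrate the basic "store the hash,
    # compare the hashes" scheme; the rest of this post explains why salting
    # and deliberately slow functions are needed in practice.
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

# At registration time only the hash is stored, never the plain text.
stored_hash = hash_password("Myd0ghasb!gears")

# At login time the submitted password is hashed and the two hashes compared.
def check_password(submitted: str) -> bool:
    return hash_password(submitted) == stored_hash

print(check_password("Myd0ghasb!gears"))  # True
print(check_password("password123"))      # False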

Unfortunately, as time moved on computers became faster and faster... and faster still.  So much so that even your home computer is capable of undertaking millions of comparisons a second.  Plus the hashing algorithms have become well known, if only because systems developers were encouraged to implement them to protect passwords.   This led to the development of what is known as the "dictionary attack", which relies upon simple brute force.
In essence it's simple. You take a dictionary of words that might be used as passwords, you hash each word yourself and you compare the resulting hashes with the hashed password you have access to. When you have a match you look back at your dictionary to see what the original plaintext word, i.e. the password, was.  As it still takes an appreciable time to hash the dictionary being used to mount the attack, people began pre-computing the hashed forms of the dictionary.  The resulting sets of hashes became known as "rainbow tables".  Now all you have to do is compare stolen hashed passwords with your rainbow table, find a match and look back in your index to find the original word/password.
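A toy sketch of the idea, with a deliberately tiny wordlist, shows why this works so well against common passwords (the "leaked" hash here is manufactured for the example):

```python
import hashlib

# Tiny, illustrative dictionary of likely passwords.
wordlist = ["password", "letmein", "123456", "qwerty"]

leaked_hash = hashlib.sha256(b"letmein").hexdigest()  # stand-in for a stolen hash

# Loosely, the pre-computed lookup the post refers to as a rainbow table:
# hash every candidate once, then reuse the table against any leak.
precomputed = {hashlib.sha256(w.encode()).hexdigest(): w for w in wordlist}

print(precomputed.get(leaked_hash))  # -> 'letmein'
```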
Hackers have been able to steal huge sets of hashed passwords (sometimes hundreds of thousands) and, using these techniques, compute the original passwords almost before the keeper knows they are missing. The answer is to add a touch of salt.
A "salt" is a randomly generated set of characters which you add before or after the password characters before passing the whole thing through your hashing function (there is a short sketch of this after the list of mistakes below). Now the hacker's dictionary or rainbow tables should, in theory, be useless. But, as ever, whilst the theory is sound, the way system developers sprinkle their salt can give the hackers another route in.  Typical mistakes are:

1. Choosing a random character string that is not truly random.  Computers have great difficulty generating anything truly random, so this can be difficult, and some developers have in the past taken short cuts, assuming that no one would guess how they generated their "random" characters. They were wrong.

2. Choosing a random character string that is too short.  If it is short enough there are only so many possible values it could take, so it is possible to calculate them all and simply add them to your dictionary.

3. Using the same random character string for every password. One of the greatest helps a cryptographer can give a cryptanalyst (who is trying to break their code) is to reuse the same string of characters.  Once found, this salt allows the attacker to compute all the passwords almost as if the salt had never been added.
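Put together, a minimal sketch of salted hashing that avoids the three mistakes above might look like this in Python (SHA-256 and the 16-byte salt length are illustrative choices, not taken from any particular system):

```python
import hashlib
import os

def hash_with_salt(password: str) -> tuple[bytes, str]:
    # A fresh salt for every password (avoids mistake 3), drawn from the
    # operating system's randomness source and 16 bytes long (mistakes 1 and 2).
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + password.encode("utf-8")).hexdigest()
    return salt, digest  # the salt is stored alongside the hash; it is not a secret

def verify(password: str, salt: bytes, expected_digest: str) -> bool:
    return hashlib.sha256(salt + password.encode("utf-8")).hexdigest() == expected_digest

salt, digest = hash_with_salt("Myd0ghasb!gears")
print(verify("Myd0ghasb!gears", salt, digest))  # True
print(verify("password123", salt, digest))      # False
```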

Ideally systems would store the salt on a separate system from the username and hashed password. However, practical considerations often mean this is not done, so the hacker might be able to obtain the salt as well as the username and hashed password.  From these they can, of course, then simply compute the original passwords.  However, because of the way in which it has to be done it is a much slower process, and if a hacker is attempting to crack thousands of passwords it will take much longer than they want.  So hackers have moved on from using computers as you might recognise them to harnessing one particular part of your computer: the Graphics Processing Unit (GPU).
Whilst most people have been aware that the processors in their home computers have become faster and faster, the GPU has been silently developing to achieve quite astronomical speeds.  GPUs can achieve such speeds because they are dedicated to very specific types of computing, such as decoding video or generating 3D graphics.  They can be optimised to dedicate more of their processing power to these graphics functions - they don't need to be able to do the general purpose work that your Central Processing Unit (CPU), the brain of your computer, must be capable of.
However, for some time now hackers (or more particularly "password crackers") have worked out how to combine many of these GPUs to produce a mini-supercomputer.  These sit on a desktop and can be built from parts routinely available on the Internet.  The software needed to run the GPUs in parallel, and the software to use them to crack passwords as explained above, is freely available to download if you know where to look.  Suddenly, although salted hashes make it more difficult, the arms race swings back in favour of those seeking to find your password.
But the war is not over. It might seem obvious, but it is only relatively recently that those seeking to protect passwords have started to research hashing functions that are deliberately slow.  Whereas, because of their original purpose, hash functions were always designed to be fast and efficient, some of the latest hash functions are deliberately slow. The idea is that you cause the hackers/crackers so much inconvenience, even with their home-built supercomputers, that they move on to easier targets.  You can't stop them eventually calculating your password, but you can make it take a long time.
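As one concrete example (not one named in this post), PBKDF2, available in Python's standard library, makes hashing deliberately slow simply by iterating an underlying hash a huge number of times; the iteration count below is an illustrative figure rather than a recommendation:

```python
import hashlib
import os

def slow_hash(password: str, salt: bytes, iterations: int = 600_000) -> bytes:
    # PBKDF2 repeatedly applies HMAC-SHA256, so every verification - and
    # therefore every cracking guess - costs hundreds of thousands of hash
    # invocations instead of one.
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)

salt = os.urandom(16)
stored = slow_hash("Myd0ghasb!gears", salt)

print(stored == slow_hash("Myd0ghasb!gears", salt))  # True, but noticeably slower
```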

There is one way that you can help enormously: choose a "strong password", which is simply a set of characters that is unlikely to appear in the hacker's dictionary.  That's why many systems insist that you use unusual characters in your password.  For example, if you chose a phrase like "my dog has big ears", you could write that as "Myd0ghasb!gears".  The other thing you can do is not reuse passwords.  Much easier said than done, but sadly not all systems are developed to the same high standards, so your password is only as secure as the weakest of those systems: it is pointless having slow, salted hashes on one system if the same password is stored in plaintext on another.

Monday, 14 October 2013

Security Using Biometrics: What You Need To Know

The recent release of fingerprint scanners on various smartphones has again raised interest in the use of biometric data to secure our electronic devices.  How much simpler to touch your phone to unlock it than to have to punch in that four-digit PIN (or, if you understand the vulnerability of four-digit PINs, that six-digit PIN).  Making security invisible is always attractive: no matter how security minded users are, they all consider having to unlock their phone an inconvenience.

But is the use of biometrics really the answer?

Let's start by explaining what is meant by the term "biometric". Biometric technology uses electronic methods to identify a person by a variety of unique physical characteristics. The best known are face and fingerprint recognition, and iris scanning.  Others continue to be developed, and some of the more recent forms of biometric technology utilise not just your physical characteristics but also your behaviours, eg the way in which you walk.

Face recognition and fingerprint recognition in particular made an appearance on some laptops several years ago.  It didn't catch on.  The primary reasons were:
  1. For any chosen biometric feature the recognition algorithm has to allow for some "flex". A living thing does not stay exactly the same from moment to moment, never mind between logins. However, there is nothing that annoys users more than false rejection: the legitimate user expects to be recognised 100% of the time, not 90% or 80%.  This led to some recognition systems having such large tolerances that they erred on the side of granting, for example, system access when it wasn't a real match. The only thing that annoys users more than being locked out of their own system is when the system erroneously grants access to others (a small sketch of this threshold trade-off follows this list).
  2. Historically, biometric systems have not been very good at recognising living creatures (especially humans).  Hence there have been many stories in the press about, for example, fingerprint systems being fooled by everything from a photograph to a gummy bear. Even the recent iPhone fingerprint recognition system was allegedly hacked by the Chaos Computer Club using "lifted" fingerprints within days of the device's release, although it's not quite as easy as the video makes it appear.
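To make the first point concrete, here is a toy sketch of the decision every biometric matcher has to take; the scores and threshold are invented purely for illustration:

```python
# A matcher reduces a biometric sample to a similarity score and compares it
# to a threshold. Lowering the threshold cuts false rejections of the real
# user but raises the chance of falsely accepting an impostor.
THRESHOLD = 0.80  # invented value for illustration

def decide(similarity_score: float) -> str:
    return "accept" if similarity_score >= THRESHOLD else "reject"

print(decide(0.93))  # legitimate user on a good day          -> accept
print(decide(0.78))  # same user, cut finger or poor light    -> reject (false rejection)
print(decide(0.81))  # impostor scraping past a lax threshold -> accept (false acceptance)
```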

Plus, the detection devices are improving.

The iPhone sensor is what is known as a capacitive sensor: it detects not simply an image of the fingerprint but the profile between the tops of your fingerprint ridges and the troughs.
But sadly it doesn't look like it is yet foolproof, even if it is making life more difficult for would-be hackers.

I suspect this is something of a renaissance for fingerprint recognition.  Smartphones are a slightly better platform, not only because the technology has evolved since it first appeared on laptops, but also because mobile phones have the potential to act as a means of providing universal two-factor authentication.

We have already seen online services from major vendors using text messages to send authentication codes to supplement passwords. With the move to make security as transparent as possible, fingerprint recognition is an obvious way to prevent such a code "easily" falling into the hands of someone who has unauthorised access to both your password and your phone.

All of which begs a question. How secure is that biometric data?  If someone stole your phone could they then steal your biometric data and impersonate you on other systems? 

As always the devil is in the detail.  Fingerprints are usually not stored as an image; rather, bifurcations (changes of direction or splits) in the ridges are mapped, and it is those that are stored and then compared when you place your finger on the sensor.  But this data is still useful and could potentially be misused, which is why it is vital that it is stored securely: encrypted, or in some other secure storage that prevents unauthorised users simply walking away with your biometric data.
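As a sketch of that last point, here is one way a stored template might be protected at rest, using the third-party Python "cryptography" package; the template bytes and the key handling are placeholders, not how any particular phone actually does it:

```python
from cryptography.fernet import Fernet  # third-party "cryptography" package

# Placeholder standing in for a minutiae map; not a real fingerprint template.
template = b"minutiae-map-placeholder"

# In practice the key would live in a hardware-backed keystore, never
# alongside the encrypted data.
key = Fernet.generate_key()
stored_ciphertext = Fernet(key).encrypt(template)

# Only code holding the key can recover the template for matching.
assert Fernet(key).decrypt(stored_ciphertext) == template
```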

We should assume the use of biometrics in security is here to stay, but whenever you see it in use I would recommend you explore two questions:

1. Is it the only means of securing your device?  If so, be very careful that it has not already been circumvented.
2. Is it stored securely? Many will use woolly terms such as "encrypted" but it's important that the manufacturers state, for example, what encryption is being used. 

As ever in security, the weakest point in the security chain defines the true strength of the security.  Don't rely upon something that has a weak link.

Thursday, 3 October 2013

Spying On Financial Transactions

Allegations have surfaced about how various law enforcement and intelligence agencies might monitor millions of financial transactions around the world.  But, putting aside the emotional reaction to being monitored, how much should we really be concerned as individuals?

In this article I try to give a balanced perspective of just how much these alleged operations would be an invasion of privacy.

How Big A Problem Are Solar Storms

Recently attention has refocused on the potential damage that solar storms could do to vital equipment on Earth.  In this article I try to explain how this risk should be viewed.

Wednesday, 7 August 2013

What Glues Together The Internet

The Internet operates in a way that can sometimes seem like magic. Data not only knows where to go but also what route to take. That routing is vital to the successful operation of the Internet: without it, data would literally get lost or go via systems that would render the journey so slow as to be useless. And it all relies upon something called the Border Gateway Protocol (BGP).

As with other protocols, BGP is set out in a standard from the Internet Engineering Task Force (IETF) called RFC 4271.

Relationships Between ISPs
To understand BGP one needs to start by realising that the Internet is a series of interconnected networks.  Hence the term "Internet".  This requires those operating those networks (known as peers) to have a means of agreeing how data is passed between them, and how data will transit their networks so that it can reach the network of a third party.

Those that provide access to the Internet have a reasonably complex relationship with each other.  There are different "tiers" starting with operators of the largest networks as Tier 1 (eg Google, Microsoft, et al) down to Tier 3 providers who might well be those who ultimately provide access to you as a home user.

Some relationships are direct between peers but some interconnectivity is also provided by a global network of Internet Exchanges such as the London Internet Exchange (LINX).

How traffic is directed through the Internet can be thought of in two parts:

  1. The way in which data is routed within an Autonomous System (AS), which is a part of the network that is under the control of a single organisation.  It uses protocols such as Open Shortest Path First (OSPF).
  2. The interconnections between the ASs. This is where BGP is used: it advertises a network within an AS to its peers.  It doesn't say how data will be routed within the AS, but it does say how it is connected to other networks, including which IP addresses it uses.
Most users are aware that they are given an IP address when they connect to the Internet. The tricky part is when you attempt to send data to another IP address: you need to know what network it is on so that you can decide who to send it to so it can be routed on its journey. The problem is that there is no central authority to which you can refer.  Whilst there are organisations that allocate IP addresses, there is no definitive list you can check.

However, all of this information is shared between networks using a set of "routing tables", and these tables are updated and exchanged on the basis of trust between peers. All of the routers under the control of a particular ISP rely upon the data they receive from other ISPs. And there's the rub.

If someone were able to corrupt these routing tables then they could spoof IP addresses, ie they could have data intended for a particular address sent to them.  Not a trivial task, and not something for those lacking in technical ability. But if someone were able to gain control of a router run by an ISP, it could be done.
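A toy model of a routing table makes the mechanism clear: routers forward towards the most specific matching prefix, so a corrupted, more specific announcement silently wins. All prefixes and next hops below are example values from documentation ranges, not real announcements:

```python
import ipaddress

# For a destination address, pick the longest (most specific) matching
# prefix and forward to that next hop.
routing_table = {
    ipaddress.ip_network("203.0.113.0/24"): "peer-A",
    ipaddress.ip_network("0.0.0.0/0"): "upstream-default",
}

def next_hop(destination: str) -> str:
    dest = ipaddress.ip_address(destination)
    matches = [net for net in routing_table if dest in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return routing_table[best]

print(next_hop("203.0.113.9"))   # -> peer-A

# If a corrupted update injects a more specific prefix, it silently wins:
routing_table[ipaddress.ip_network("203.0.113.0/25")] = "attacker-controlled"
print(next_hop("203.0.113.9"))   # -> attacker-controlled
```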

So, how easy is it to gain control of a router? Not surprisingly, the ISPs have been making it more difficult over time, and they guard access, so it is not trivial.  However, there are many ISPs (estimates run to 40,000) running very many routers, so it's not unknown for some to be left with default passwords, or even for back doors to emerge that allow remote access. Hence, whilst not easy, the effects of it happening across swathes of the Internet would be profound.

BGP spoofing is very difficult to defend against. There are ways to mitigate attacks but no universal defence exists (that I know of).

The outstanding question is how prevalent such attacks are.  I'm not sure anyone really knows.  It's certainly an area worthy of further research.  It is a topic that has not been discussed as widely as other attacks, primarily because other forms of attack are considered more damaging.  However, I can't help thinking that BGP spoofing could be used as a means of delivering those more damaging attacks, and as such it really needs to be better understood.