Thursday, 4 February 2016

GCHQ Christmas 2015 Puzzle Solution (Corrected)

GCHQ have now put up their solutions to their Christmas 2015 quiz.  They can be found here.

As you can see by comparing to my suggestions, I got several answers right but for the wrong reasons.

Also, in the Part 5 worduko, the solutions I show contain the messages I found, but GCHQ were after a more obscure set of letters from the top row, i.e. the ones that are consistent with the messages in the shaded squares.

How To Solve The GCHQ Christmas 2015 Puzzle (Possibly?)

Now that the competition is closed I thought it was safe to publish this blog post.

I'm sure some of this is wrong but I'm fairly sure some is correct, so I thought I'd have a go at explaining the sorts of approach one might take to solving the GCHQ Christmas 2015 puzzle.  It's entirely possible I may have arrived at the right answer for the wrong reasons.

Where it's wrong, or where you can think of a better way to solve or explain the puzzle, please let me know.


Part 1

The initial puzzle that was delivered with the GCHQ Christmas card was this:



It is a nonogram.  There are plenty of sites that provide these types of puzzle, for example, http://www.puzzle-nonograms.com/

To solve the puzzle, one determines which squares should be shaded and which should be left empty; working out which squares are blank is just as important as finding the shaded ones. One should never guess: a cell should only be shaded or left blank in the final solution if its status can be determined logically.

Techniques for solving these puzzles can be found on Wikipedia under “nonogram”: https://en.wikipedia.org/wiki/Nonogram
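
To make the line-by-line technique concrete, here is a minimal Python sketch (my own illustration, not part of GCHQ's official solution) of the basic deduction step: enumerate every legal placement of a row's clue and mark the cells that are shaded, or blank, in all of them.

```python
def placements(clues, length):
    """Yield every legal way of placing the clue runs in a row of the
    given length, as tuples of 0/1 (1 = shaded)."""
    if not clues:
        yield (0,) * length
        return
    run, rest = clues[0], clues[1:]
    tail = sum(rest) + len(rest)          # space the remaining runs still need
    for start in range(length - run - tail + 1):
        prefix = (0,) * start + (1,) * run
        if rest:
            prefix += (0,)                # mandatory gap after this run
        for suffix in placements(rest, length - len(prefix)):
            yield prefix + suffix

def forced_cells(clues, length):
    """Cells that are shaded (1) or blank (0) in every legal placement;
    None means the cell is still undetermined."""
    options = list(placements(clues, length))
    return [1 if all(o[i] for o in options) else
            0 if not any(o[i] for o in options) else None
            for i in range(length)]

# A clue of (3, 2) in an 8-cell row forces the third cell to be shaded,
# and a clue of (7, 2) in a 10-cell row has only one possible placement.
print(forced_cells((3, 2), 8))
print(forced_cells((7, 2), 10))
```

Repeating this deduction alternately over the rows and columns until nothing changes is enough to solve most puzzles of this kind without guessing.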

Securing RFID Chips

If you ask most security advisers how to secure your electronic devices, somewhere in the advice you'll hear: use an up-to-date virus checker, keep your operating system patched, and so on. Sadly, many attacks do not rely upon hitting the security head on; instead they look for information leaking in other ways: so-called side channel attacks.

If you can physically access a device you can sometimes use the likes of power consumption to look for patterns that reveal much of what is apparently hidden behind the security features of the device.  We have known for many years that the contents of computer memory can be gleaned this way, even when using strong encryption such as the Advanced Encryption Standard (AES).

This was recently exemplified when it was discovered that some virtual machines could extract data via leakage from other virtual machines that share the same physical platform.

With the increasing use of Radio Frequency Identification (RFID), one concern has been that such chips could be subjected to these side channel attacks.  As they are used to store increasing amounts of sensitive (particularly personal) data, such leakage is an obvious place for hackers to target.  And of course, the rise of contactless payment systems has brought a whole new impetus to this form of attack.

To date most of the security measures have not addressed the hardware.  They have looked at alternative methods of protecting the data on the chip, typically relying upon, for example, changing the private key regularly so that repeated execution of the encryption algorithm doesn't allow cryptanalytic attacks to be mounted.

What these attacks have in common is that they rely upon the fact that the card reader powers up the chip every time it is used.  That power consumption can give indications of what is being placed in memory.  In practice you have to go through this cycle thousands of times, and the keys must not change between cycles, in order to determine the memory contents.  It's non-trivial, but it has been demonstrated.
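
As a purely illustrative sketch of why that works, the toy Python below simulates a chip whose power draw follows the Hamming weight of a secret byte XORed with known inputs, and then recovers the byte by correlating each guess against the noisy "traces".  The leakage model, noise level and trace count are all assumptions chosen for the demonstration; real attacks target things like S-box outputs and need far more care.

```python
import numpy as np

rng = np.random.default_rng(1)
SECRET = 0x3C                      # the byte we pretend is hidden on the chip
N = 5000                           # number of observed power-up cycles

hw = np.array([bin(x).count("1") for x in range(256)])   # Hamming weights

# Simulated leakage: each "power trace" is the Hamming weight of
# (input XOR secret) plus measurement noise.
inputs = rng.integers(0, 256, N)
traces = hw[inputs ^ SECRET] + rng.normal(0, 2.0, N)

# Attack: for every candidate byte, correlate the predicted leakage with
# the measured traces; the correct guess correlates best.
scores = [abs(np.corrcoef(hw[inputs ^ g], traces)[0, 1]) for g in range(256)]
print("recovered byte:", hex(int(np.argmax(scores))))     # -> 0x3c
```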

So, researchers at MIT have now come up with a chip design which they hope will make side channel attacks that require physical access more difficult.  It is intended specifically to protect against attacks that use power consumption to determine memory contents.

In essence, there are two new elements:
  • A built-in power supply that is practically impossible to disrupt
  • Some “nonvolatile” memory that will retain data when the power fades
Thus, each time the card is used a small on-chip capacitor is charged to act as a power supply, and it is that capacitor which then runs the chip.  Likewise, if the power is interrupted the memory is maintained rather than having to be repopulated each time the card is used.

In this way the classic side channel attack is simply defeated: the power fluctuations are no longer directly related to memory contents.

Texas Instruments have already built some prototypes and it seems they work.  Don't be surprised if this technology starts to appear on a card near you in the not so distant future.

Saturday, 30 January 2016

Anonymity vs Pseudonymity In Cryptocurrencies

I wrote earlier this week about some of the misconceptions around Bitcoin.  Probably the biggest is that if you transact using Bitcoin you can do so with total anonymity.  In the case of Bitcoin, users are confusing anonymity with pseudonymity.

Part of the design of the blockchain that Bitcoin uses is that every transaction is visible: the blockchain is highly public.  The table below shows the very latest blocks that have been accepted into the Bitcoin blockchain:
This data is drawn from a site called http://blockr.io/ but there are several such sites. If you visit https://blockchain.info/ you can even see the latest transactions that have been submitted but not yet accepted into the blockchain. In most of these online systems you can drill down into each transaction, including the as yet unconfirmed ones.  Here's one chosen at random:

The IP address from which the transaction hails could easily be obscured using something such as Tor, but as discussed previously Tor itself does not guarantee anonymity.

More importantly for those assuming Bitcoin is anonymous, every transaction has an origin and a destination public address, and as soon as you have such metadata you can call upon cluster analysis to start looking for associations and correlations.  Using these techniques, combined with the fact that some addresses are well-known sources of illegal activity, you can quickly start to build a picture of where "money" is flowing and for what purpose.
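
The best known of these clustering tricks is the common-input-ownership heuristic used in studies of this kind: addresses that appear together as inputs to the same transaction are assumed to be controlled by one entity, and a union-find structure merges them into clusters.  The Python below is a toy illustration with made-up addresses, not any particular tool's implementation.

```python
class DisjointSet:
    """Union-find over Bitcoin addresses."""
    def __init__(self):
        self.parent = {}

    def find(self, a):
        self.parent.setdefault(a, a)
        while self.parent[a] != a:
            self.parent[a] = self.parent[self.parent[a]]   # path halving
            a = self.parent[a]
        return a

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def cluster(transactions):
    """Common-input-ownership heuristic: all addresses that co-sign the
    inputs of one transaction are merged into one cluster."""
    ds = DisjointSet()
    for tx in transactions:
        inputs = tx["inputs"]
        for addr in inputs:
            ds.find(addr)              # register every address we see
        for addr in inputs[1:]:
            ds.union(inputs[0], addr)
    clusters = {}
    for addr in ds.parent:
        clusters.setdefault(ds.find(addr), set()).add(addr)
    return list(clusters.values())

# Toy data: tx1 links A and B, tx2 links B and C, so {A, B, C} is one entity.
txs = [{"inputs": ["A", "B"]}, {"inputs": ["B", "C"]}, {"inputs": ["D"]}]
print(cluster(txs))    # -> [{'A', 'B', 'C'}, {'D'}]
```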

Not surprising then that research has been conducted for years using exactly these techniques, primarily to see how Bitcoin was actually being used.  The starting point was back in 2013 when Sarah Meiklejohn et al published their initial findings.

What was perhaps surprising was what the analysis revealed: the vast majority of Bitcoin transactions (in 2012/2013) by quantity, though not by value, were for online gambling.

Graphical Results From Work Of Meiklejohn et al

The technique was also used to analyse some very high profile thefts of Bitcoins.  One case was particularly revealing: following what was billed as a theft by hackers, the Bitcoins in question were shown never to have left the Bitcoin exchange.  It was fraud on a grand scale.

You probably also won't be surprised to learn that these techniques have been productised and are in use by law enforcement agencies around the world.  Probably the best tool I have come across is Chainalysis, which was demonstrated live at a recent meetup in New York.



Having played with the tool I was able to track some very interesting activities and, without knowing who was behind the public addresses, it was possible to infer who they were and what they were doing.  Bear in mind that every transaction ever conducted with Bitcoin is kept in the blockchain, so you can do some very interesting historical analysis.


And there you have it.  Bitcoin is not anonymous.  It affords a degree of anonymity, but being only pseudonymous means that, by careful analysis of the metadata, it is often possible to track down illegal use of Bitcoins.  The classic policing technique of "follow the money" isn't dead yet.


Monday, 25 January 2016

Perception Of Bitcoin: Not What You Thought

I've written before about cryptocurrencies such as Bitcoin: I am a self-confessed sceptic.  However, until now I'd never seen any form of properly peer reviewed study on what general users think of Bitcoin.  Yes, it has a market capitalisation of something around $6bn, but does that mean it is going to be popular outside of specific niche communities?

The research I recently came across would suggest Bitcoin has a long way to go before it gains the popularity that some would have us believe it has already attained.  The key findings of the study were:
  • People who actively use Bitcoin did not necessarily understand how it operated
  • Bitcoin users overestimated its ability to protect a user's anonymity - many not realising how transparent the transactions are
  • Those who had not used Bitcoin thought it "too scary" to use
Probably the most ironic finding was that Bitcoin users want government insurance of Bitcoin deposits.  One of the biggest selling points for Bitcoin was that it was decentralised and had no government involvement. 

If you put all of these findings together you do find yourself asking why people are using Bitcoin at all.  Some cadres of users, we know, use it to obscure transactions: the findings of the iOCTA report of 2015 were that 40% of criminal-to-criminal transactions within Europe used Bitcoin somewhere. But that leaves a lot of users who are using it for non-criminal purposes, many of whom appear to have based their understanding on high level descriptions or biased reporting.

Even those involved in developing Bitcoin appear to think it is on its last legs.  But, despite my scepticism I'm not sure. I've heard others say it is doomed and will fail imminently - they've been saying that since it was first mooted in 2008/9.

If you are going to trust your personal wealth to a technology, I would strongly advise that you have an understanding of that technology.  After all, it's the technology that you have to trust, as there are no people, no centre, no bank to whom you can turn.

If you are looking for a good introduction I would strongly recommend this video - it's 25 minutes of your time well spent:





Predicting the future is always dangerous, but even if those who say Bitcoin is dying are correct, I suspect other cryptocurrencies (and there were 595 the last time I looked) will emerge to compensate for any perceived problems with Bitcoin.

Bitcoin always was something of an experiment, so it wouldn't be surprising to see it evolve, but the blockchain technology underpinning Bitcoin is so useful that I can't see that disappearing completely.

Friday, 22 January 2016

Unpredictability As A Security Measure

I'm not sure yet if this is a good idea in practice, but the concept is quite fascinating.  In essence, malicious software assumes your computer will operate in a certain way, so why not confuse it by being unpredictable?

The idea is being worked upon by Daniela Oliveira, amongst others.  The paper she wrote was presented at last year's USENIX, but I'm only just getting around to reading it.  I'm surprised I'd not heard of it before.

The principle has some very sound basis in military strategy. The Art of War (孫子兵法) by Sun Tzu has many suggestions on how to outwit your enemy which would appear to be quite applicable here, if they can be made to work in practice.

I understand that Prof Oliveira is working on an operating system called Chameleon in which she and her colleagues aim to encompass the principles set out in the paper and presentation.

We've already seen some of the ideas suggested for Chameleon in honeypots.  However, in addition to allowing malware to operate in a façade environment whilst the system collects data about the malicious software, Chameleon looks as if it goes further by having common operating system functions respond in unpredictable ways.

And it's at that point that any operating system designer would throw their hands up and say this is going to make the operating system unusable: the very essence of good operating system design is to have it behave as predictably as possible, even when the inputs are slightly unpredictable.  The concept of perturbing system calls in the kernel of an operating system literally doesn't compute for most of us.
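
To make the idea tangible, here is a toy Python sketch, entirely my own illustration rather than anything from the Chameleon work, of what "unpredictable" behaviour towards a suspicious caller might look like: an ordinary operation is wrapped so that it sometimes stalls, errors or returns truncated data.

```python
import errno
import os
import random
import time

def unpredictable(call, suspicious=lambda: False, p=0.3):
    """Wrap a file-style operation so that callers judged suspicious get
    unpredictable behaviour: delays, spurious errors or silent truncation.
    Purely a toy illustration of the principle, not how Chameleon is built."""
    def wrapper(*args, **kwargs):
        if suspicious() and random.random() < p:
            action = random.choice(["delay", "error", "truncate"])
            if action == "delay":
                time.sleep(random.uniform(0.1, 2.0))        # jittered timing
            elif action == "error":
                raise OSError(errno.EIO, os.strerror(errno.EIO))
            else:
                data = call(*args, **kwargs)
                return data[: len(data) // 2]                # partial result
        return call(*args, **kwargs)
    return wrapper

# Example: reads made by a caller flagged as suspicious sometimes fail or stall.
def read_file(path):
    with open(path, "rb") as f:
        return f.read()

guarded_read = unpredictable(read_file, suspicious=lambda: True, p=0.5)
```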

But, the initial research conducted does suggest that the security gains from such an approach are quite considerable so it is well worth further study to see if a suitable level of trade off can be found.

Having only just found the concept I've no idea if this approach will be practical, but it's certainly an area of research to keep an eye on, and I look forward to seeing Chameleon in action.

Thursday, 21 January 2016

Do We Need An Internet Bill Of Rights?

Today I took part in a BBC World Service discussion about the Internet Bill of Rights.  This is something Sir Tim Berners-Lee has been proposing for some time now:

Following the leaks by Edward Snowden, the proposal has gained a vocal group of supporters, and some countries have gone so far as to begin forming legislative frameworks that they believe should be enacted: most notably Brazil's Marco Civil da Internet and Italy's Declaration of Internet Rights.

My own views are very much coloured by the fact that I was raised in a western, liberal democracy.  I think it is a lovely idea in principle, although I do have an issue with some of the specifics.  But even if you were to agree wholeheartedly with the idea, I can't see how it will ever happen in practice.

That's not to say that it has to be implemented everywhere.  It could be considered a "gold standard", as was the case with the UN Declaration of Human Rights: something we all agree to as an ideal towards which the world should work.  The UN Declaration led to the European Convention on Human Rights, which became law in individual European states through acts such as the British Human Rights Act.

But those countries/regimes which we most hope would change to observe the principles in, say, the UN Declaration are the very ones who have studiously ignored it.  Some have gone further and criticised it as being counter to their laws and customs.  And so it will be with any Internet Bill of Rights. 

Trying to agree who should govern the Internet has proven to be an intractable problem.  Some countries quite understandably have raised concerns that the United States still effectively controls the fundamental bodies that govern the Internet technically.  However, every attempt to agree who should take over that governance has seen endless conferences come and go with no agreement.  Why?  Because everyone has a different idea of how it should be governed.  There are bodies such as the International Telecommunication Union which seem ideal for taking on the task, but the tricky part has been finding a framework that all find acceptable.

So what happens instead?  Well, the Internet has grown up organically under a multi-stakeholder approach where everyone uses the same technical standards and follows the same conventions because they work, and because the parties want to interoperate.  The clue is in the name: Internet.  The structures that underpin the web are not a single monolithic network but rather a series of interconnected networks run by different countries according to different laws, but using a unified set of technical standards.

And it is important to remember that the Internet is not some ethereal entity that runs autonomously on fresh air.  It is a set of wires, fibres, routers and servers that have to be paid for and operated by someone.  The Internet touches earth in countries just like the telephone system, and whilst we can make an international call, do we expect the same laws to apply to that call regardless of the countries in which the participants sit?

Bearing in mind that the Internet is governed by the laws of the countries in which it operates, and remembering just how long it has taken for the UN Declaration of Human Rights to become law even in like-minded democracies, would we not be better off avoiding a whole new Bill of Rights and focusing on the existing UN Declaration?  After all, the Declaration applies to the Internet in the same way as it applies to all other aspects of our lives.  Trying to argue that the Internet is somehow a special case is unlikely to win as an argument, and I suspect we're better off trying to win the battle we have already begun rather than opening up a whole new front.

I would suggest we stick with the current multi-stakeholder governance model which, whilst not perfect, works, and keep up the fight to have the UN Declaration of Human Rights applied by all nations to all aspects of our lives, including the Internet.

Wednesday, 20 January 2016

Internet of Things Blossoming In 2016

2016 is definitely looking like the year the Internet of Things (IoT) enters the mainstream.  At the University of Surrey we have been selected to be part of the PETRAS consortium.

PETRAS is an interdisciplinary research hub aimed at maximising the economic and societal opportunities of the IoT, and is part of IoTUK.  In addition to Surrey, a prestigious list of UK universities are in the consortium: UCL (leading the consortium), University of Oxford, University of Warwick, Lancaster University, University of Southampton, University of Edinburgh and Cardiff University.

We will drive two projects:
  • Privacy and security for Connected Autonomous Cars; and
  • Security and key management for smart meter applications.
The University’s 5G testbed – the only one of its kind in the UK – will be a key part of the consortium's projects.

Watch this space where I will talk more about our projects as they start to produce results.

Tuesday, 19 January 2016

Why Do So Few Use Security Headers?

In recent months I've become increasingly perplexed as to why so few websites employ security headers.  They are not a panacea, but the security benefits from their use are so large, and the effort required to deploy them so small, that I can't see why they are not on the majority of sites that have data input fields.

One very recent blog entry by Paul Moore brought this into stark relief when he reported on a cross-site scripting problem on ASDA's website.  The issue demonstrated in his video shows just how easily a failure to validate input fields can be exploited, in this case with a particularly troubling persistent XSS:



All developers make mistakes and we all forget to add checks.  Whether through ignorance or forgetfulness, there are many entry fields on websites that do insufficient data validation.  And on a large, complex website it is all the more likely that such mistakes will be made as different elements are delegated to small groups, although you would hope they would be picked up in the checking process before going live.

This tendency makes it all the more worthwhile for websites to add the security headers that would mitigate any attempt by a hacker to exploit such a vulnerability.  These headers can be added centrally and affect the functionality of the web pages minimally.  Even when something does go wrong the failure tends to be graceful: the page fails rather than processing malformed input.
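
By way of illustration, here is a minimal sketch of what "adding them centrally" can look like at the application layer (Flask is used here purely as an example; the same headers can equally be set once at the web server or reverse proxy):

```python
from flask import Flask

app = Flask(__name__)

@app.after_request
def add_security_headers(response):
    """Attach the common security headers to every response in one place."""
    response.headers["Content-Security-Policy"] = "default-src 'self'"
    response.headers["X-Frame-Options"] = "DENY"
    response.headers["X-Content-Type-Options"] = "nosniff"
    response.headers["X-XSS-Protection"] = "1; mode=block"
    response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    return response

@app.route("/")
def index():
    return "hello"
```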

I have heard some say that these headers are a case of "belt and braces" and so not really required.  But if you forget to put on your belt, it is good to be wearing braces.  And if you are dealing with people's sensitive personal data then it really behoves you to take all the precautions you can.

As a user you can check whether a website is employing these security headers by using the site securityheaders.io.  If a website doesn't use the headers it doesn't mean it is vulnerable to an XSS attack, but it should cause you to think carefully about what data you give it.  To be safe, I'd also recommend having only the one tab open in your browser, just in case some piece of data validation has been forgotten.
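
If you prefer to check from your own machine, a few lines of Python will report which of the common headers a site returns (the header list and the example URL here are just illustrative):

```python
import urllib.request

WANTED = [
    "Content-Security-Policy",
    "X-Frame-Options",
    "X-Content-Type-Options",
    "X-XSS-Protection",
    "Strict-Transport-Security",
]

def check(url):
    """Report which of the common security headers a site sends."""
    with urllib.request.urlopen(url) as resp:
        headers = resp.headers
    for name in WANTED:
        status = "present" if headers.get(name) else "MISSING"
        print(f"{name:30s} {status}")

check("https://www.example.com/")
```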

Hardening your website's responses in this way is not difficult to do.  There is simple-to-follow advice and plenty of cheat sheets.  You'd imagine that this would mean the majority of websites would employ this useful measure.  Not so.  I was genuinely shocked when, last year, I read the results of some work that Scott Helme did to see how widely these security headers were being used: he scanned the top 1 million most visited websites and found only a fraction of a percent were using them - a few hundred out of the million!

Results from Scott Helme Study


Some of these omissions are trivial, and some of the newer headers one can understand might not be used.  Likewise those sites that have no input fields and will never be susceptible to XSS probably won't suffer through an absence of these headers.  But out of the 1 million sites I can't believe that only a few hundred accept data input.

If you're running a website it really is in your own interest to understand this old, well understood vulnerability, and to use these simple measures to prevent hackers intercepting your users' sensitive inputs.




Sunday, 10 January 2016

How Big Can A DDoS Attack Be?

On New Year's Eve 2015 the BBC's web domain was subjected to a DDoS attack.  It did cause significant disruption, and it was noticed by many users who took to social media in something of a mild panic. 

The attack interested me not just because the BBC was an unusual target but more particularly for what then followed: those claiming they were the attackers communicated with the BBC technology journalism team.

Part of that communication claims that the attack reached data rates of 600 Gb/s.



The largest data rates we had previously seen in DDoS attacks were something like 330 Gb/s, which occurred when Spamhaus was attacked in 2013.  Rates of this sort are extraordinary.  The only way so far found to mount such attacks is by using reflection/amplification attacks of the type I have described here previously (unless someone can tell me differently).  The tool being used by these attackers was claimed to be BangStresser.  It even had its own website, which in an ironic twist was protected against DDoS attacks by Cloudflare; the site has recently been taken down.

But even with the most productive reflection attacks (DNS and NTP), just how high can these data rates go?  Some "back of the envelope" maths suggests that to reach 600 Gb/s the attackers would have to be using a number of machines that I find difficult to believe could be simultaneously engaged.
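
Here is one version of that back-of-the-envelope sum, with every figure an assumption chosen purely for illustration: take a DNS amplification factor of roughly 50x and suppose each compromised machine can emit spoofed queries at about 1 Mb/s.

```python
# Back-of-the-envelope estimate (all figures are illustrative assumptions).
target_rate_gbps = 600          # claimed attack size
amplification    = 50           # rough DNS reflection gain (varies widely)
bot_uplink_mbps  = 1            # assumed spoofed-query bandwidth per machine

query_rate_gbps = target_rate_gbps / amplification
bots_needed     = query_rate_gbps * 1000 / bot_uplink_mbps

print(f"spoofed query traffic needed: {query_rate_gbps:.0f} Gb/s")
print(f"machines needed at {bot_uplink_mbps} Mb/s each: {bots_needed:,.0f}")
# -> 12 Gb/s of queries, i.e. roughly 12,000 machines able to spoof source IPs
```

Even on those generous assumptions you need on the order of ten thousand machines sitting on networks that permit source address spoofing, plus enough open resolvers to reflect 600 Gb/s at the target, which is why the claim raises eyebrows.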

What is more, in their message to the BBC the attackers state that this was just a test: apparently one that got out of hand.  If they have found a way to mount DDoS attacks of this scale then this is something that we all need to take note of.  What would be really useful would be if the attackers provided further evidence as they seemed to suggest they would.  They claim that part of their success is due to using Amazon servers but that is really very surprising as Amazon claim to have facilities in place to prevent just this sort of misuse.

Meanwhile, DDoS is firmly back on the agenda, with apparently increasing volumes signalling that this is not a form of attack that we yet have under control.

Update 27/01/2016: A report out today from Arbor Networks documents attacks increasing as discussed above, having reached speeds of 500 Gb/s.