Published on July 4th, 2019


Why are they “smart” locks if more money buys you less security?



We’ve written about so-called digital padlocks before, usually not very enthusiastically.

That’s because we’ve usually been reporting on some sort of cybersecurity blunder that has made these locks very much less secure than their owners probably thought.

To be fair, a lot of conventional padlocks and many door locks aren’t super-secure either, and can be picked fairly easily by a practised crook.

And many traditional locks can simply be chopped or smashed open with bolt cutters or a sledgehammer.

Safe enough?

Yet we carry on using old-school padlocks, bike locks and door locks because they’re generally “safe enough”.

One reason is that reputable vendors are generally willing to be honest about what “safe enough” means, and well-informed staff in reputable shops (your mileage may vary if you buy online) know enough to give you a frank assessment of how much effort a thief would need to put in to bypass the various products they sell.

Even if those assessments are anecdotal, they’re often easy to understand and useful.

For example, if someone advises you that a cheap wire lock will keep a 12-year-old bike thief with a decent pair of pliers at bay for a minute or two, while a hardened steel D-lock will defeat many small bolt cutters but yield readily to an adult with an angle grinder…

…well, that sort of information is pretty useful in assessing which brand and type of lock to use when and where.

If you know the sort of tools, time and strength a crook would need to cut your lock off, then you can envisage the sort of unwanted attention they might attract, the noise they might make, and the likelihood that they’d get away before you noticed and frightened them off.

But when you add the moniker “smart” to a lock, meaning that it can be unlocked not only in a conventional sort of way using a physical key, but also by typing in a PIN or by using some sort of mobile phone app…

…well, then it’s much harder to decide if the lock provides a whole lot of extra security for the (often much) higher price, or if the extra money that you’re paying paradoxically leaves you with a lock that is significantly easier to open unlawfully.

As we reported last year in the case of a Canadian lock imaginatively called the Tapplock (it had a fingerprint sensor so you could literally tap-to-open), researchers were able to write an app that would silently open any Tapplock in just 2 seconds.

Even the genuine app needed 0.8 seconds to open a specific lock that it had been paired with.

Subsequently, another researcher found that you didn’t need to hack a Tapplock live in situ by figuring out its passcode – you could exploit a security bug in the company’s cloud service to download everyone’s personal data before you even set out.

You didn’t even need to extract someone else’s passcode because you could add yourself as an “authorised user” of other people’s locks – a trick that would make it much harder for anyone, including law enforcement or the legitimate owner, to challenge you if you were caught opening someone else’s lock.

Worse still, Vangelis Stykas found that the database that could be leeched from the company’s central servers often included geolocation data giving the places where the lock had recently been used – essentially telling you where to look, as well as letting you in when you got there.

Gone in N seconds

Pen Test Partners (PTP), the company that produced the “gone in 2 seconds” app we mentioned above, recently wrote up another story to remind us all that more – as in price and features – may still mean less in the world of “smart” locks.

Not that this means you should avoid smart locks altogether, of course – we’re not that uncharitable – but that the security you see on the surface often gives you nothing much to go on.

Simply put, the “heft test” that gives you a hint of how robust an old-school lock is likely to be just doesn’t work in the digital era.

For example, PTP recently looked at a product called the Ultraloq, which is a door lock with a keypad, a fingerprint reader and a Bluetooth module added.

It’s promoted as a great way of dealing with doorways where you need to let guests or delivery people in and out on a regular basis, so they can get in today but not tomorrow, for example.





Digital locks are, if the truth be told, a great way of dealing with guest access, which is why almost every hotel in the developed world hands out card keys these days, instead of actual physical keys.

Guests who forget to return their keys no longer cost you money or weaken your security; you don’t need to label every key with a huge, privacy-sapping tag with the room number printed on it; and you can revoke access easily in the event of trouble.
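
As a rough sketch of why that works so well (the names below are invented purely for illustration, not taken from any real product), a digital guest credential is essentially just a record with a validity window and a revocation flag, checked at the moment the door is asked to open:

    # Minimal sketch of time-limited, revocable guest access (illustrative names only).
    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class GuestCredential:
        credential_id: str
        valid_from: datetime
        valid_until: datetime
        revoked: bool = False          # flipped server-side to revoke access instantly

    def may_open(cred: GuestCredential) -> bool:
        """The door opens only inside the validity window and only if the
        credential hasn't been revoked - in today, but not tomorrow."""
        now = datetime.now(timezone.utc)
        return (not cred.revoked) and cred.valid_from <= now <= cred.valid_until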

But PTP found a number of blunders in the Ultraloq suggesting that programmers in the world of smart locks are still prone to making the sorts of coding mistakes that the rest of us learned to avoid (or were forced to avoid because of public scrutiny) many years ago.

Good and bad news

One of Ultraloq’s blunders has, happily, now been fixed: according to PTP, you could use the company’s cloud service to pull off the same sort of attack that Vangelis Stykas found last year in the Tapplock case.

User IDs could be extracted via the company’s web interface without any authentication and, worse still, those IDs seemed to be sequential numbers, so you could not only guess a valid user ID but also “calculate” all or most of the rest simply by adding (or subtracting) 1.

Ultraloq has updated its API to avoid these mistakes, but guessable or sequential IDs and unauthenticated access to personal data are coding blunders that no web programmer should be making in the year 2009. (Sorry, 2019.)
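
To see why sequential identifiers are such a gift, consider the sketch below. The numbers and the leaked ID are invented for illustration – this is not Ultraloq’s actual API – but the arithmetic is the whole attack: one known ID gives you all the others, whereas a randomly generated identifier gives an attacker nothing to count up or down from.

    # Illustration only: sequential IDs are enumerable, random ones are not.
    import uuid

    def neighbouring_ids(leaked_id: int, spread: int = 5) -> list[int]:
        """Given one valid sequential user ID, 'calculate' the rest by adding
        or subtracting 1 - no authentication or guesswork required."""
        return [leaked_id + offset for offset in range(-spread, spread + 1)]

    print(neighbouring_ids(100042, 3))   # e.g. [100039, 100040, ..., 100045]

    # A random identifier such as a UUID leaves roughly 2**122 possibilities,
    # so one leaked ID tells you nothing about anyone else's.
    print(uuid.uuid4())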

Another problem that PTP found is that although Ultraloq used encryption on the contents of Bluetooth traffic between the user’s app and the lock itself, and although the encryption kept plaintext PINs out of radio sight, it didn’t actually provide the security it was supposed to.

Encrypting authentication traffic is important to prevent passwords leaking directly into the ether, but in the case of a digital door lock, encryption must also protect the door against unauthorised access, whether or not the crooks have figured out your actual password.

According to PTP, the Bluetooth encryption relies on a secret that the unlocking app has to request from the lock, combined with a secret baked into the app itself.

The app’s secret is hardwired and can be read out from the app (PTP listed it; we shan’t repeat it here), so it’s not actually a secret at all.

The other half of the key – PTP calls it “the token” – can be obtained from the lock via a special Bluetooth request.
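
PTP hasn’t published the exact key construction, so the sketch below is an assumption made purely to illustrate the shape of the flaw: if one half of the key is baked into the app (and therefore extractable) and the other half can simply be requested from the lock over Bluetooth, then anyone within radio range can derive the same key.

    # ASSUMED key construction, for illustration only - not the real Ultraloq scheme.
    import hashlib

    APP_SECRET = b"hardcoded-in-the-app"        # readable straight out of the app, so not secret

    def derive_packet_key(lock_token: bytes) -> bytes:
        """Combine the (not actually secret) app secret with the token the lock
        will hand to anyone who asks over Bluetooth."""
        return hashlib.sha256(APP_SECRET + lock_token).digest()

    # An eavesdropper requests the token from the lock, runs the same derivation,
    # and ends up holding the very key that was supposed to keep them out.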

The problem with this sort of approach is that if you’re nevertheless relying on authentication packets that contain a six-digit PIN, there are still only 1,000,000 different PINs to try, and 1,000,000 different encrypted authentication packets will run through them all.

If you make the Bluetooth-requestable token the same for every lock, then the task is trivial – a crook could generate all 1,000,000 possible “let me in” packets up front, and anyone could use them any time they liked.

If you make the token different for every lock, but constant once the lock is in service, the task is still trivial – a crook could generate all 1,000,000 possible “let me in” packets for a specific lock after requesting the lock’s unique token.

From PTP’s description, which suggests that the packet encryption uses AES on a 6-digit PIN with a lock-specific token, this pre-calculation would typically take seconds.

And even if you make the token different for every authentication request, a naive implementation that relies on straight packet encryption still only has 1,000,000 different PINs to check.
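
To put a number on “trivial”: the sketch below pre-computes every possible encrypted “let me in” packet for a six-digit PIN, using the third-party Python cryptography package. The packet format, nonce and key derivation are invented for illustration (PTP didn’t publish them) – the only point is that the search space is just 10^6, which a modern laptop covers in seconds.

    # Sketch only: the packet format and key handling are assumptions; the
    # 1,000,000-entry search space is the point.
    import hashlib
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    APP_SECRET = b"hardcoded-in-the-app"        # as in the sketch above (assumed)
    LOCK_TOKEN = b"requested-over-bluetooth"    # the lock hands this to anyone who asks
    KEY = hashlib.sha256(APP_SECRET + LOCK_TOKEN).digest()
    NONCE = b"\x00" * 12                        # fixed nonce, purely illustrative

    aes = AESGCM(KEY)
    candidate_packets = [
        aes.encrypt(NONCE, f"{pin:06d}".encode(), None)   # one packet per possible PIN
        for pin in range(1_000_000)
    ]
    # A million AES operations finish in seconds; the attacker then replays
    # candidates until the lock opens.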

What to do?

We’ll stick to short and simple advice here: if you’re keen on adopting smart locks, use your own web search smarts to find a device that has not only been subjected to reputable, independent penetration tests, but also come out with a positive recommendation.

Don’t rely on “positive reviews” on the vendor’s own website; avoid “reviews” posted on the site where you got the smart lock’s app; and be wary of “reviews” from lifestyle publications that cover cool products, especially if those reviews make any cryptographic claims. (Being a cryptanalyst is hard!)

Smart lock vendors mean well, but at the price points they’re typically aiming for, many of them just aren’t cutting the cryptographic mustard yet, and are cutting cryptographic corners instead. (Being a cryptographer is hard.)

Don’t be afraid to vote with your chequebook – or your NFC-enabled credit card – and treat cybersecurity as something that is a value to be maximised, not a cost to be kept as low as possible.

