The Good Old Keyboard Days
I was struggling to correct the devastating mistake of Caps Lock on the latest (and spectacularly awful) Gnome and I realized why I feel so strongly about this.
It also explains why I can't type ampersands and parentheses as fluently as other characters. I have also been known to hesitate when typing an equals sign. I wonder how long it would take for me to be comfortable typing on that again?
I don't go crazy insisting that modern Tab keys be replaced with Esc. That's a bit of a sanity check which reassures me that putting a Ctrl to the left of the A key is truly the correct thing to do, at least for programmers. And I don't even use Emacs! NO, I NEVER DO GET NOSTALGIC FOR TYPING IN ALL UPPER CASE.
I don't think I really have anything to add to Brian Krebs' article on the new Windows 10 Wi-Fi password "feature":
Unless you opt out, Windows 10 will by default share your Wi-Fi network password [hashed] with any contacts you may have listed in Outlook and Skype — and, with an opt-in, your Facebook friends!
Wow. It sounds like Microsoft is very unclear about with whom this kind of shared secret should be shared.
I guess the consequence of this will be plausible deniability for everyone. Let the file sharing/trolling begin! Or am I missing something?
I almost never deal with Windows, Facebook, Skype, or Outlook, so I'll be mostly a spectator on this one, but it's still pretty disturbing.
Compiles... Runs... Works!! OMG!!!
After studying OpenCV for so long, it was very depressing to get stuck on the starting line by orthogonal v4l codec issues. Finally I prevailed and was able to write a C program that takes raw input from my PlayStation Eye webcam and lets me process and/or save it.
Often in programming, crossing the starting line is half the journey.
To celebrate I made a funny animated GIF of that moment every serious programmer knows when some recalcitrant code finally compiles and works.
This was done using my aforementioned capture program plus avconv and ImageMagick (documented in my notes, of course).
Hover over the image to animate or click here.
What a relief! I am so ready to move on!
Defense Against The Dark Arts
In my last post I pointed out that trusting a big company’s cloud service is no worse than trusting the same big company’s locally run software. Assuming a situation where we might wish to not fully trust anyone, an astute reader asked about the implicit trust we give to our hardware manufacturers.
The specific concern was that a company like Intel, ARM, or AMD could subvert physical CPUs to unnaturally cooperate with an attacker. I immediately thought of a system where a magic value stored in memory or a register triggered arbitrary execution or privilege escalation. I also thought of subverting PRNGs as a likely target for this kind of attack. I think such a thing is definitely possible. There are many good resources about CPU backdoors that would corroborate such a belief. This Wikipedia article on the shenanigans involving the Dual EC DRBG random number generator and the NSA makes it pretty clear that this isn't the kind of threat that's in the same category as, say, aliens from space beaming thoughts into your head which make you "accidentally" delete your PowerPoint slides.
I would personally say that the reason this attack is unlikely to be widely problematic in the wild at this time is that there are so many much easier ways for dedicated attackers to compromise systems. But imagine a world where everyone goes Stallman and insists on a certain level of maximal transparency. (Uh, let's not dwell on Ken Thompson's problem of trusting trust - let's assume, like Stallman and Thompson, we can write everything in opcodes from scratch.) The opacity of the hardware layer would still pose a problem. What could possibly be done to ameliorate this class of threat?
I can think of two things. The first is pretty obvious - carefully check stuff. I think this is one of the reasons why a poorly executed hardware attack would be doomed. Someone somewhere would have some weird use case that gets the "wrong" answer. They would wonder, they would post about it, and it would work its way up to security researchers who would delight in isolating the problem. We saw how this would work with a simple but subtle error in mid-1990s Intel CPUs (the Pentium FDIV bug). But as sophistication goes up, one can imagine mechanisms designed to keep researchers from replaying and isolating the hardware exploit.
With that in mind, and given that pretty much any hardware can be subverted (memory, motherboard bridges, bus controllers, Ethernet controllers, etc.), defending against this kind of thing is no small problem. My second approach would be to use a distributed VM. Is this wildly complex? Yes. Practical? Probably not. Completely effective? I don't think that's possible, really. But it could add so much entropy to what happens at the low level, relative to the genuine results you actually want, that corrupting the transistor logic simply stops being a good attack. I feel like a misbehaving CPU would simply cause errors for a distributed VM system more than it would successfully attack the user-level applications. That might still suffice for a denial-of-service attack. Of course, I could be flagrantly wrong about this, and it's already rather impractical anyway.
Without much more to say, I’ll conclude with a link to a video of Jeri Ellsworth making a batch of microchips in her kitchen. And for the rest of us, a nice instructional video on making very stylish tin foil hats. Aluminum foil actually; tin forms highly toxic stannanes. Which is a reminder that there’s always something out to get us!
Partly Cloudy Locally
Every month, for at least the last ten years, I have read Bruce Schneier's CRYPTO-GRAM newsletter. If you're a security professional of any kind, the only excuse for not doing this is that you already know everything he writes about, and it's pretty safe to assume you don't. With this in mind, it's not every day that hubris gets the better of me such that I am ready to completely repudiate Schneier's wisdom on a rather important and topical security issue. However, today is that day.
In a series of articles in The Economist, Schneier takes up the question "Should Companies Do Most Of Their Computing in the Cloud?" Since I am a bespoke cloud computing craftsman, you may think my arguments are similar to the usual ones that the "non-cloud" partisans advance (which Bruce competently covers in the articles).
No. Not at all. I'm actually pretty sympathetic to cloud advantages. As you'll see, it's probably better than a normal local setup. In this entire debate, I feel that both sides (cloud is good/cloud is bad) have largely missed the most glaring and important security issue. Interestingly, I've felt this way for nearly 20 years, since "cloud" was still a weather feature. With the exception of a negligible number of insane people, I've never found anyone who seems to have given my perspective any thought at all. That's why I feel it might be good to clearly state my personal rule of cloud security.
If you cannot audit the software you use for privileged tasks and you connect that system to the internet, then your system is potentially as insecure as possible.
I'm not quibbling with a detail here. Bruce Schneier is wrong. Let me demonstrate the absurdity of the current argument by using Schneier's own computer habits. Here Bruce provides a pretty bog-standard rundown of "cloud is bad" thinking.
In contrast, I personally use the cloud as little as possible. My e-mail is on my own computer — I am one of the last Eudora users — and not at a web service like Gmail or Hotmail. I don’t store my contacts or calendar in the cloud. I don’t use cloud backup. I don’t have personal accounts on social networking sites like Facebook or Twitter. (This makes me a freak, but highly productive.) And I don’t use many software and hardware products that I would otherwise really like, because they force you to keep your data in the cloud: Trello, Evernote, Fitbit.
My cloud computing avoidance closely follows his. The problem here is that if you think it's important to improve security by doing things this way, you cannot use an operating system like Microsoft Windows (or OS X). If you use a proprietary operating system, you have completely failed at the objective of not trusting the exact same companies that you would need to trust to use the cloud. Notice I'm not advocating for one thing or the other. I'm just pointing out that the security concerns about trusting the cloud are nothing new. If you didn't feel the need to scrutinize your dependence on proprietary software, then congratulations! You don't need to worry about cloud security either. It couldn't possibly be worse.
Interestingly, Schneier knows he's wrong. In the same CRYPTO-GRAM he quotes, without argument, Micah Lee, who points out what has always been obvious to me.
Whatever you choose, if trusting a proprietary operating system not to be malicious doesn’t fit your threat model, maybe it’s time to switch to Linux.
If you can't trust Azure to safely do whatever you want done with your data, you can't trust Windows itself, for the exact same reasons. But it's not merely an equivalent threat. Your Windows (or OS X) non-cloud local system is worse against the threat of a compromised or untrustworthy service provider. Yes, worse. The reason is the same as the answer to why criminals would much rather break into my Linux cluster to mine BTC than physically break into the supercomputer center and haul it away in a truck. Why would the perpetrator want to pay for electricity, hardware, facilities, etc.? If the NSA wanted all data, it's far more efficient for them to let you host it. All they would need is a key to pop into your computer at any time. Are you sure there is no backdoor on your computer? For me, that's a theoretically testable hypothesis. I fear that for Schneier it's magical thinking.