Lessons from LinkedIn Intro for product security teams


That was quick: the controversial LinkedIn Intro feature, which effectively performed a man-in-the-middle attack against user email on iOS devices, has been scrapped unceremoniously less than three months after its introduction. It was a quiet farewell for a feature announced with great fanfare in a blog post trumpeting the impressive engineering feats involved: installing LinkedIn as an email proxy to intercept and capture user email, routing all incoming messages through LinkedIn servers in the cloud, annotating them with LinkedIn profile information about the sender, and delivering the result back into the inbox. Amidst the predictable chorus of I-told-you-so and schadenfreude, one question is rarely raised: why did this feature ship in the first place? More importantly, from the perspective of security professionals: what was the involvement of the LinkedIn security team in its conception, design and implementation? And what lessons can we draw from this debacle about avoiding the reputational risk of ill-conceived features that damage user trust?
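Purely as an illustration of that interception pattern, and emphatically not LinkedIn’s actual code, the annotation step of such a mail proxy might look roughly like the sketch below. The `lookup_profile` callback is a hypothetical stand-in for a query against some profile directory.

```python
import email
from email import policy

def annotate_message(raw_bytes, lookup_profile):
    """Prepend a profile banner to the text body of an intercepted message.

    raw_bytes      -- the original RFC 5322 message as seen by the proxy
    lookup_profile -- hypothetical callback mapping a From: address to profile data
    """
    msg = email.message_from_bytes(raw_bytes, policy=policy.default)
    profile = lookup_profile(msg.get("From", ""))
    if profile is None:
        return raw_bytes  # unknown sender: pass the message through untouched

    body = msg.get_body(preferencelist=("plain",))
    if body is not None:
        banner = f"[{profile['name']} - {profile['headline']}]\n\n"
        body.set_content(banner + body.get_content())
    return msg.as_bytes()

# Toy usage with a canned directory in place of a real profile service.
if __name__ == "__main__":
    directory = {"alice@example.com": {"name": "Alice", "headline": "CEO, Example Corp"}}
    raw = (b"From: alice@example.com\r\nTo: bob@example.com\r\n"
           b"Subject: hello\r\nContent-Type: text/plain\r\n\r\nLunch?\r\n")
    print(annotate_message(raw, lambda addr: directory.get(str(addr))).decode())
```

Even this toy version makes the trust problem obvious: whoever operates the proxy sees, stores and rewrites every message in transit, which is exactly the property that earned Intro the man-in-the-middle label.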

Much of what follows is qualified by the caveat that this blogger is not privy to whatever internal deliberations took place at LinkedIn around Intro. For that reason we cannot tell whether this was a failure in the security-assurance process (and, as such, entirely avoidable) or a well-thought-out business decision that simply proved incorrect in hindsight.

Moving security upstream

In companies with an immature approach to information security, the bulk of the activity around managing risk appears towards the end of the product cycle. At that stage the product requirements have already been cast in stone, the technical architecture is well-defined, the specifications are written (to the extent that anyone writes specs in this day and age of agile methodologies) and even the bulk of the implementation is complete. Into this scene steps our security engineer, at the eleventh hour, to conduct some type of sanity check. It may range from a cursory high-level threat model to low-level code review or black-box testing. That work can be extremely valuable, depending on the time and skill invested in uncovering vulnerabilities. Yet the bigger problem remains: much of the security battle has already been won or lost based on decisions made long before a single line of code was written. Such decisions are not expressed in localized chunks of code that can be spotted by a security reviewer scrutinizing the final implementation of a finished product; their implications cut across feature lines.

Beyond whack-a-mole

Consider the way web browsers were designed before IE7 and Chrome: massively complicated functionality (HTML rendering and JavaScript execution) exposed to attack by any website the user cared to visit, all contained in a single process running with the full privileges of the user. If there was a memory-corruption vulnerability anywhere in those millions of lines of code (statistically speaking, guaranteed to be present), an attacker exploiting it gained full control of the user’s account, and often the entire machine, since everyone was running with administrator privileges anyway. There are two approaches to solving this problem. The first is what MSFT historically tried: throwing more security-review time at the problem, trying to find one more buffer overrun. While this is not entirely a game of whack-a-mole (a downward trend in bugs introduced versus bugs found can be achieved with enough resources) it does not solve the fundamental problem: memory-corruption bugs are very likely to exist in complex software implemented in low-level languages. At some point diminishing returns kick in.

The major security innovation in IE7 and Chrome was the introduction of sandboxing: fundamentally changing the design of the web browser to run the dangerous code in low-privileged processes, adding defense-in-depth to contain the fallout from successful exploitation of a vulnerability that eluded all attempts to uncover it before shipping. This is a fundamental change to the design and architecture of the web browser. Unlike yet another buffer overrun, it was not “discovered” by a security researcher staring at the code or running an automated fuzzer. Nor can it be implemented by making a small, local change in the code base. It calls for significant modifications across the board: pieces of functionality that used to reside locally in memory are now managed by another process and subject to security restrictions on access. Because of these far-reaching consequences, security features such as sandboxing must be carefully accounted for in product plans from the beginning, as fundamental design criteria (or, in the case of Internet Explorer, saddled with a decade-old legacy architecture, as an even more difficult redesign exercise).
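To make the architectural shift concrete, here is a minimal sketch, in Python and purely illustrative, of the broker pattern that browser sandboxes are built around: the untrusted “renderer” runs in a separate process with no resources of its own, every file it needs must be requested from a privileged broker, and the broker enforces policy on each request. Real sandboxes additionally strip OS-level privileges from the renderer process (integrity levels on Windows, seccomp filters on Linux), which this sketch omits.

```python
import multiprocessing as mp
import os

# Illustrative policy: the broker will only serve files under this directory.
ALLOWED_DIR = os.path.realpath("render-cache")

def broker_read(path):
    """Policy check plus file access, performed only in the privileged broker process."""
    real = os.path.realpath(path)
    if not real.startswith(ALLOWED_DIR + os.sep):
        raise PermissionError(f"renderer may not read {path}")
    with open(real, "rb") as f:
        return f.read()

def renderer(conn):
    """Low-privilege process that parses untrusted content. It opens no files itself;
    every resource request goes to the broker over the pipe."""
    conn.send(("read", os.path.join(ALLOWED_DIR, "page.html")))  # legitimate request
    print("renderer received:", conn.recv())
    conn.send(("read", "/etc/passwd"))  # what a compromised renderer might attempt
    print("renderer received:", conn.recv())
    conn.send(("quit", None))

def broker():
    parent_end, child_end = mp.Pipe()
    child = mp.Process(target=renderer, args=(child_end,))
    child.start()
    while True:
        op, arg = parent_end.recv()
        if op == "quit":
            break
        try:
            parent_end.send(broker_read(arg))
        except Exception as exc:
            parent_end.send(f"denied: {exc}")
    child.join()

if __name__ == "__main__":
    os.makedirs(ALLOWED_DIR, exist_ok=True)
    with open(os.path.join(ALLOWED_DIR, "page.html"), "w") as f:
        f.write("<html>hello</html>")
    broker()
```

In an actual browser the broker role is played by the main browser process, so even arbitrary code execution inside a renderer buys the attacker little direct access to the user’s files: precisely the defense-in-depth property described above.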

Security and development lifecycle

The introduction of sandboxing in web browsers is one example of a more general pattern: security assurance has greater impact as it moves upstream in the development lifecycle. Instead of coming in to evaluate and find flaws in an almost-finished product, security professionals participate in the architecture and design phases. This also reduces the cost of fixing issues: not only are some flaws easier to spot at the design level, they are also much cheaper to fix before any time has been spent writing code. There is no reason to stop at product design either: even before a project is formally kicked off, the security team can provide ad hoc consulting, give presentations on common security defects or develop reference implementations of core functionality, such as authentication libraries, to be incorporated into upcoming products. But the ultimate sign of maturity in risk management is when security professionals have a voice at the table when deciding what to ship. Incidentally, having a vote is not the same as having veto power; information-security risk is only one of many perspectives on the business. A healthy organization provides an avenue for the security team to flag dubious product ideas early in the process, before they gain momentum and acquire the psychological attachment of sunk costs.

Whither LinkedIn?

Returning to the problem of LinkedIn Intro, the two questions are:

  • Does LinkedIn’s culture give the security team an opportunity to voice objections to reckless product ideas before they are fully baked?
  • Assuming the answer is yes, did the team try, however unsuccessfully, to kill Intro in an effort to save the company from public ridicule and embarrassment down the line? If so, the security-assurance process worked: the team did its job, and the failure rests with the decision-makers who overruled its objections and green-lighted this boondoggle.

We do not know the answer. The public record only shows that the LinkedIn security team attempted to defend Intro, even going so far as to point out that it was reviewed by iSEC Partners. (Curiously, iSEC itself did not step forward to defend its client and that brilliant product idea.) But that does not necessarily imply unequivocal support. This would not be the first time a security team has been put in the awkward position of publicly defending a questionable design it had unsuccessfully lobbied to change.

CP

