
Lessons from the Letter

Security flaws in a large organization

Dear Readers,

I recently received a letter in which a company notified me that it had exposed some of my personal information. While it is now quite common for personal data to be stolen, this letter amazed me because of how well it pointed out two major flaws in the systems of the company that lost the data. I am going to quote three illuminating paragraphs here and then discuss what they can actually teach us.

"The self-described hackers wrote software code to randomly generate numbers that mimicked serial numbers of the AT&T SIM card for iPad—called the integrated circuit card identification (ICC-ID)—and repeatedly queried an AT&T web address."

This paragraph literally stunned me, and then I burst out laughing. Let's face it, we all know that it's better to laugh than to cry. Unless these "self-described hackers" were using a botnet to attack the Web page, they were probably coming from one or a small number of IP addresses. Who, in this day and age, does not rate limit requests to their Web sites based on source IP addresses? Well, clearly we know one company that doesn't. It's very simple: if you expose an API—and a URL is an API when you're dealing with the Web—then someone is going to call that API, and that someone can be anywhere in the world.
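The fix KV is describing need not be elaborate. Here is a minimal sketch of a per-source-IP token-bucket rate limiter; the class name, rate, and burst values are illustrative choices, not anything AT&T actually ran:

```python
import time
from collections import defaultdict

class IPRateLimiter:
    """Token-bucket limiter keyed by source IP address: each address may
    make `rate` requests per second on average, with bursts up to `burst`.
    A real deployment would also expire idle buckets to bound memory."""

    def __init__(self, rate=5.0, burst=10):
        self.rate = rate
        self.burst = burst
        self.buckets = defaultdict(
            lambda: {"tokens": burst, "last": time.monotonic()})

    def allow(self, ip):
        b = self.buckets[ip]
        now = time.monotonic()
        # Refill tokens for the time elapsed since this IP's last request.
        b["tokens"] = min(self.burst, b["tokens"] + (now - b["last"]) * self.rate)
        b["last"] = now
        if b["tokens"] >= 1:
            b["tokens"] -= 1
            return True   # serve the request
        return False      # reject (e.g., HTTP 429)
```

A front-end server would call `allow()` before touching the back end; once a single address exhausts its burst, further requests are rejected until tokens refill, which turns a million-query enumeration into a multi-year project.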

A large company doing this is basically begging to be abused: it's not just leaving your door unlocked; it's like a bank letting you try a million times to guess your PIN at the ATM. Given enough time—and computers have a lot of time on their hands—you're going to guess correctly eventually. That's why ATMs DON'T LET YOU GUESS a million PINs! All right, in this case the company was not going to lose money directly, but it certainly lost a good deal of credibility with its customers and, more importantly, possible future customers. Sometimes brand damage can be far worse than direct financial damage.
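The back-of-envelope arithmetic makes the point starkly. The request rates below are assumptions for illustration—a scripted HTTP client can easily manage 100 requests per second against an unprotected endpoint, while a throttled endpoint might grant one address only a handful of tries per hour:

```python
def time_to_enumerate(guesses, requests_per_second):
    """Seconds needed to issue `guesses` requests at the given rate."""
    return guesses / requests_per_second

# Hypothetical numbers: one client, one million candidate IDs.
unthrottled = time_to_enumerate(1_000_000, 100)        # no rate limit
throttled   = time_to_enumerate(1_000_000, 5 / 3600)   # 5 requests/hour per IP

print(f"unthrottled: {unthrottled / 3600:.1f} hours")       # ~2.8 hours
print(f"throttled:   {throttled / 86400 / 365:.0f} years")  # ~23 years
```

The attacker's cost goes from an afternoon to longer than anyone's patience, which is the whole game: rate limiting rarely makes guessing impossible, it just makes it uneconomical.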

Now we come to the next paragraph, in which the company admits to not having proper controls over its own systems:

"Within hours, AT&T disabled the mechanism that automatically populated the email address. Now, the authentication page log-in screen requires the user to enter both their email address and their password."

"Within hours?!" Are you serious? At this point I was laughing so hard it hurt, and my other half was wondering what was wrong with me, since I rarely laugh when reading the mail. The lesson of this paragraph is to always have the ability to kill any service that you run, and to be able to roll forward or roll back quickly. In fact, this is the argument made by many Web 2.0, and 1.0, and even 0.1 proponents: that, unlike packaged software, which has release cycles measured in weeks and months, the Web allows a company to roll out changes in an instant. In geological time, hours might be an instant, but when someone is abusing your systems, hours are a long time—long enough, it seems, to acquire several hundred thousand e-mail addresses.
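One common way to get that ability is a kill switch: a flag consulted on every request, so an abused endpoint can be shut off in seconds without a redeploy. The sketch below keeps the flag in process memory for brevity; the endpoint name `icc_id_lookup` and the `lookup_email` helper are invented for illustration, and a real deployment would keep the flags in shared configuration (a database, etcd, or the like) so every server sees the change at once:

```python
class KillSwitch:
    """Registry of disabled endpoints, checked before any real work."""

    def __init__(self):
        self.disabled = set()

    def disable(self, endpoint):
        self.disabled.add(endpoint)

    def enable(self, endpoint):
        self.disabled.discard(endpoint)

    def check(self, endpoint):
        # Raise before the handler does anything expensive or sensitive.
        if endpoint in self.disabled:
            raise RuntimeError(f"{endpoint} is temporarily disabled")

switch = KillSwitch()

def lookup_email(icc_id):
    switch.check("icc_id_lookup")          # first line of every handler
    return f"user-{icc_id}@example.com"    # placeholder for the real lookup
```

The operational payoff is that `switch.disable("icc_id_lookup")` takes effect on the very next request—minutes of exposure instead of hours—while the rest of the service keeps running.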

Finally, in the next paragraph we find that someone at AT&T actually understands the risk to its customers:

"While the attack was limited to email address and ICC-ID data, we encourage you to be alert to scams that could attempt to use this information to obtain other data or send you unwanted mail. You can learn more about phishing at www.att.com/safety."

I somehow picture a beleaguered security wonk having to explain, using very small words, to overpaid directors and vice presidents just what risk the company has exposed its users to. Most people now think, "E-mail address, big deal, dime a dozen," but of course phishing people based on something you know about them, like their new toy's hardware ID, is one of the most common forms of scam.

So, some simple lessons: rate limit your Web APIs, have kill switches in place to prevent abuse, have the ability to roll out changes quickly, and remember to hire honest people who can think like the bad guys, because they are the ones who understand the risks.

One other thing is for sure: this letter's a keeper.

KV

KODE VICIOUS, known to mere mortals as George V. Neville-Neil, works on networking and operating system code for fun and profit. He also teaches courses on various subjects related to programming. His areas of interest are code spelunking, operating systems, and rewriting your bad code (OK, maybe not that last one). He earned his bachelor's degree in computer science at Northeastern University in Boston, Massachusetts, and is a member of ACM, the Usenix Association, and IEEE. He is an avid bicyclist and traveler who currently lives in New York City.

© 2010 ACM 1542-7730/10/0700 $10.00


Originally published in Queue vol. 8, no. 7