
I will start with two assumptions:

1) Online file storage and transfer is cheap, and it’s getting cheaper every day.

2) Generally, the longer a password is, the more secure it is, because it takes longer for attackers to brute force any stolen password hashes. [1]

So why can’t we replace this…

[Image: a typical short password entry field]

… with this?

[Image: an upload prompt for a ~1MB authentication file]

Details

Instead of worrying that your 10-, 15-, or 25-character password is long enough, what if you could upload a 1MB authentication file containing about 1,048,576 characters?  How long would it take an attacker to brute force any stolen password hashes, in that scenario?  (Even with today’s advances in modern computing power, I’d argue it would still be orders of magnitude better than today’s typical secure password.)
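
To put rough numbers behind that claim, here is a back-of-the-envelope comparison (a sketch only; the exact figures depend on the character set used and on the file being truly random):

```python
import math

# Keyspace comparison: a strong 20-character password drawn from the
# ~94 printable ASCII symbols vs. a 1 MiB file of random bytes.
password_bits = 20 * math.log2(94)       # ~131 bits of entropy
auth_file_bits = 1_048_576 * 8           # 8,388,608 bits, if truly random

print(f"20-char password : ~{password_bits:.0f} bits")
print(f"1 MiB random file: {auth_file_bits:,} bits")

# Even at a (very generous) 10^12 guesses per second, exhausting the
# 20-character keyspace already takes roughly 10^20 years; the file's
# keyspace is astronomically larger still.
seconds = (2 ** password_bits) / 1e12
print(f"20-char keyspace at 1e12 guesses/s: ~{seconds / (3600 * 24 * 365):.1e} years")
```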

Most password managers (e.g., 1Password, KeePass) can auto-generate unique, strong passwords with sufficient entropy — why not extend those mechanisms to auto-generate unique 1MB authentication files per online service, instead?
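
As a rough sketch of what that could look like (the vault directory and file-naming scheme below are purely hypothetical, not any existing password manager’s format):

```python
import secrets
from pathlib import Path

def generate_auth_file(service_name: str, vault_dir: Path, size: int = 1_048_576) -> Path:
    """Generate a unique, random ~1 MiB authentication file for one service.

    This mirrors how password managers generate per-site passwords today;
    the vault layout and naming here are purely illustrative.
    """
    vault_dir.mkdir(parents=True, exist_ok=True)
    auth_path = vault_dir / f"{service_name}.authfile"
    auth_path.write_bytes(secrets.token_bytes(size))  # CSPRNG-backed randomness
    return auth_path

# Example: one independent credential per online service.
vault = Path.home() / ".authfile-vault"
for service in ("example-mail", "example-bank"):
    print(generate_auth_file(service, vault))
```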

Yes, this proposal places more burden on online service providers, who must process a larger amount of initial authentication data (1MB instead of ~20 characters), but that additional cost is tiny compared to all of the data (e.g., photos, videos) users upload to a typical service after successful authentication.

Why are we even still using passwords?

Sometimes, information security improvements are more evolutionary than revolutionary — for good reason.  Retooling infrastructure to support something completely different is costly, and various human factors (e.g., users resistant to change) stifle overall adoption of the new security measures.

Passwords are a single-factor authentication (1FA) mechanism.  The concept has been around in computer security for 30+ years.  People are used to it by now.  Yet most passwords are considered woefully insecure for modern online transactions.  People have to remember their passwords, and so they naturally choose predictable ones.

Password Revolution

Then came the idea of two-factor authentication (2FA) or multi-factor authentication (MFA).  It is generally agreed to be more secure, because even if an attacker manages to guess your password, they would need access to all other factors (e.g., keyfob, time-based token) to gain entry.

However, 2FA is hard for the average online user to set up and use, so service providers have resorted to using “weaker” second factors, like SMS messages to a user’s mobile phone.  With a weak password and the ability to intercept SMS messages, an attacker can still gain access. [2]  Still, 2FA makes it incrementally harder for attackers (and therefore better than 1FA).

Lastly, there are real efforts to make 2FA/MFA easier to use online (e.g., FIDO UAF/U2F), but they still lack seamless interoperability across all vendors.  For example, if you enable your Google Account to use a FIDO-compatible Security Key, you won’t be able to access Mail/Calendar/etc using native Apple iOS apps on the iPhone/iPad (you’ll have to use the Google-specific iOS Gmail/Calendar apps). [3]

[Screenshot: Google’s documentation of the Security Key limitation with native iOS apps]

Password Evolution

So others started to focus on fixing the original, core problem of 1FA in parallel — password reuse and weak passwords.  “Password managers” now exist, such as 1Password and KeePass, which simplify a user’s ability to create unique, strong passwords that are only ever used for authenticating to a single online service.

Many online services validate and (sometimes) enforce a minimum password length/complexity, and these approaches incrementally raise the bar against attackers.  But attackers evolve too.  It is extremely cheap for an attacker to spin up cloud computing infrastructure to brute force stolen password hashes. [4]  As these resources become cheaper, service providers are forced to keep raising the bar on password length/complexity, accordingly.  This is our modern-day authentication “arms race” across all 1FA service providers.

Conclusion

Online service providers will always need to authenticate users.  To be crystal clear, I am absolutely in favor of promoting easy, ubiquitous multi-factor authentication, long term.  As such, while I applaud the fact that providers are making 2FA and MFA mechanisms easier for normal users, I still believe the “password” factor can and should be improved in parallel.

Will switching to ~1MB authentication files solve all computer security problems?

No, but it seems like a good next step.  We have been waiting for universal MFA for years, and there is still a lot of work left to get MFA adopted across every service provider.  This proposal is arguably faster to implement as a near-term step.

That said, this proposed improvement likely won’t work in all cases.  For example, if you want to read your Gmail from your Apple iPhone, both Google and Apple would have to update their authentication mechanisms to allow users to specify “authentication files” instead of passwords.

But no one is going to manually type a 1,048,576-character password into an iOS text box.  Again, password manager apps could make this process easier by storing these authentication files and providing them transparently to online services, upon user request.  This type of code change to existing authentication mechanisms can be costly and non-trivial to implement; however, I’d argue that such changes are likely less complicated than completely revamping authentication to support something like FIDO UAF/U2F.

On the server-side, we’re proposing changing all password fields from a variable-length character field (with a limit of perhaps 128 characters) to a variable-length multibyte field (with a limit of perhaps 1MB).
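
To be concrete about what “processing” a 1MB credential might involve on the server, here is a minimal sketch, assuming the provider pre-hashes the blob down to a fixed-size digest and then runs a standard password-hashing KDF over that digest (PBKDF2 shown for simplicity; Argon2 or scrypt would work equally well):

```python
import hashlib
import hmac
import os

KDF_ITERATIONS = 600_000  # illustrative work factor

def hash_auth_blob(auth_blob, salt=None):
    """Derive a stored verifier from a large (~1 MiB) authentication blob.

    Pre-hashing with SHA-256 collapses the blob to a fixed 32 bytes, so the
    expensive KDF runs over a small, constant-size input no matter how large
    the uploaded credential is.
    """
    salt = salt or os.urandom(16)
    digest = hashlib.sha256(auth_blob).digest()
    verifier = hashlib.pbkdf2_hmac("sha256", digest, salt, KDF_ITERATIONS)
    return salt, verifier

def verify_auth_blob(auth_blob, salt, stored_verifier):
    digest = hashlib.sha256(auth_blob).digest()
    candidate = hashlib.pbkdf2_hmac("sha256", digest, salt, KDF_ITERATIONS)
    return hmac.compare_digest(candidate, stored_verifier)  # constant-time compare
```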

On the client-side, we’re proposing transmitting multibyte data as the password, and we cannot expect users to manually type in this data — instead, some sort of password manager tool must exist on the client to auto-generate and manage this data upon user request.  Thankfully, such tools already exist across most client platforms and can be extended to support this functionality.
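
For completeness, here is a minimal client-side sketch of “providing the file transparently upon user request”; the login URL and the use of a raw POST body are hypothetical, since each provider would define its own submission format:

```python
import urllib.request
from pathlib import Path

def login_with_auth_file(auth_path: Path, login_url: str) -> int:
    """Submit the stored authentication file as the 'password' credential.

    In practice a password manager or browser extension would do this on the
    user's behalf; this just shows that the client-side change is modest.
    """
    body = auth_path.read_bytes()  # ~1 MiB credential, never typed by a human
    request = urllib.request.Request(
        login_url,
        data=body,
        headers={"Content-Type": "application/octet-stream"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:  # always over HTTPS
        return response.status

# Hypothetical usage:
# status = login_with_auth_file(Path.home() / ".authfile-vault/example-mail.authfile",
#                               "https://example-mail.invalid/login")
```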

What other issues remain?

This proposal doesn’t solve all problems.  For example, even if you improve initial authentication using “authentication files” and/or strong MFA controls, there are still other avenues of attack.

For example, consider self-service “account recovery” mechanisms that exist across most service providers.

In many cases, attackers can successfully compromise accounts without any credentials, by tricking users into sending them answers to all account recovery questions. [5]

If 1FA service providers store clear text passwords directly (rather than storing password hashes), then game over.

If an attacker manages to compromise the 1FA service provider directly and successfully steal clear text user passwords, it won’t matter if the password length is 20 characters or 1M characters.

An interesting side effect of using authentication files, though, is the increased storage cost for those 1FA service providers who choose to insecurely store clear text passwords directly.  Hopefully, that cost will incentivize them to securely store password hashes, instead.
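
A quick back-of-the-envelope calculation shows how strong that incentive would be (the 10-million-user figure is arbitrary):

```python
USERS = 10_000_000                      # hypothetical user base
clear_text = USERS * 1_048_576          # ~1 MiB cleartext credential per user
hashed = USERS * 32                     # 32-byte digest per user

print(f"Cleartext auth files: {clear_text / 1024**4:.1f} TiB")   # ~9.5 TiB
print(f"Stored hashes:        {hashed / 1024**2:.1f} MiB")       # ~305 MiB
```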

Of course, there are other issues with this proposal that I haven’t explicitly acknowledged.  Feel free to comment below, as I welcome a healthy debate on this issue.  Thanks for reading!

References:
[1] https://www.grc.com/haystack.htm
[2] https://www.wired.com/2016/06/hey-stop-using-texts-two-factor-authentication/
[3] https://support.google.com/accounts/answer/6103523?co=GENIE.Platform%3DiOS&oco=0
[4] http://www.infoworld.com/article/2625330/data-security/amazon-ec2-enables-brute-force-attacks-on-the-cheap.html
[5] https://www.symantec.com/connect/blogs/password-recovery-scam-tricks-users-handing-over-email-account-access

VMware VIX is one of VMware’s many virtualization APIs; it allows you to control and automate activity inside and outside of virtual machines (VMs).  Unlike VMware’s VI API (now called the vSphere SDK for Web Services, with a Perl-specific wrapper called the vSphere SDK for Perl), VIX has a checkered past: it was originally developed for VMware Workstation, while the VI codebase was almost exclusively developed for ESX.  Naturally, this means VI was designed with enterprise features in mind, while VIX only started supporting ESX with its latest version, 1.6.2.  Why use VIX?  Simple.  It’s a very easy way to control programs *inside* the guest OS of the VM.  By using VIX, you can tell the VMware Tools daemon to copy a file, run a program, and even launch a browser.  It all sounds great, until you start trying to use VIX.  So far, I’ve found two major problems with it.
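
Before getting to those, here is roughly the kind of guest automation VIX enables, sketched through the vmrun command-line tool that ships with VIX (the VM path, guest credentials, and file/program paths below are placeholders):

```python
import subprocess

# Illustrative only: vmrun is the CLI front-end to the VIX API.
VMX = "/vms/honeyclient/honeyclient.vmx"
GUEST_AUTH = ["-gu", "guestuser", "-gp", "guestpass"]

def vmrun(*args: str) -> None:
    subprocess.run(["vmrun", "-T", "ws", *GUEST_AUTH, *args], check=True)

# Copy a file into the guest, then run a program inside it via VMware Tools.
vmrun("copyFileFromHostToGuest", VMX, "/tmp/payload.txt", "C:\\payload.txt")
vmrun("runProgramInGuest", VMX, "C:\\Windows\\System32\\notepad.exe")
```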


Let’s say you want to create a lot of snapshots for a single VM inside VMware ESX. How many snapshots can you create before your ESX server’s performance goes to crap? Furthermore, is this maximum number relative, depending on how the snapshots are organized in the tree? (For example, can I have more snapshots of depth 1 versus fewer snapshots of depth 3?)

Those were just some of the questions I’ve been dealing with while porting the HoneyClient code to ESX.

Here’s the short answer: If your snapshots are getting stored as regular files (which is the default), then UNDER NO CIRCUMSTANCES should you exceed 32 snapshots per VM.

The longer answer: Regardless of how the snapshots are organized, VMware ESX appears to seek through ALL snapshots (in chronological order) when performing ANY snapshot-related operation (e.g., renaming a given snapshot). So, if you create 128 snapshots and just want to rename/revert/alter/delete the 3rd snapshot, then guess what? The ESX server will still iterate through all 128 snapshots before performing the requested operation.

You’d think ESX would be smart enough to perform some sort of depth-first (or even breadth-first) search, but no, it doesn’t. This limitation comes from the fact that the snapshot metadata is stored as a FLAT FILE in chronological order.

Wait, it gets better. It appears that as the number of snapshots increases linearly, the amount of time ESX needs to seek through them grows at a geometric rate (not quite exponential, but bad enough). This delay becomes noticeable once the number of snapshots exceeds 32.
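
If you want to reproduce this yourself, a rough timing harness along these lines should show the effect (the host URL, credentials, and VM path are placeholders; listSnapshots stands in for “any snapshot-related operation”):

```python
import subprocess
import time

# Placeholders: ESX host URL, credentials, and VM path.
HOST_FLAGS = ["-T", "esx", "-h", "https://esx-host/sdk", "-u", "root", "-p", "password"]
VMX = "[datastore1] testvm/testvm.vmx"

def vmrun(*args: str) -> None:
    subprocess.run(["vmrun", *HOST_FLAGS, *args], check=True)

# Create snapshots one at a time, timing a snapshot-related operation after
# each, to watch the per-operation cost grow with the total snapshot count.
for n in range(1, 65):
    vmrun("snapshot", VMX, f"snap-{n:03d}")
    start = time.monotonic()
    vmrun("listSnapshots", VMX)
    print(f"{n} snapshots: listSnapshots took {time.monotonic() - start:.2f}s")
```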

So, here are my questions to VMware and other ESX users:

1) I’m guessing that this 32 snapshot limit still exists, even if the disks and snapshot data were mapped to a physical iSCSI volume via RDM. Can anyone confirm?

2) VMware: Are you ever planning on improving this snapshotting implementation, for those who want more than 32 snapshots?

3) Is anyone aware of similar limitations in other virtualization products? (VirtualBox, Xen, etc.?)

So now that VMware has released a free version of ESXi, has anyone had any luck getting it to run natively on any laptops? For example, Dell D6xx or D8xx series? And, no, I’m not talking about running ESXi inside a VM within VMware Workstation.

Update: Apparently ESXi can’t detect IDE-based laptop hard drives, but it can boot directly off USB.  Here’s the link that explains how to boot ESXi off USB.

Assume you have a Linux system with more than one network interface card (NIC), say eth0 and eth1. By default, administrators define a single default route (on eth0). However, if you receive traffic (e.g., ICMP pings) on eth1, the return traffic will go out eth0 by default.
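
The usual fix is source-based policy routing: give eth1 its own routing table, plus a rule that selects that table for traffic sourced from eth1’s address. Here is a sketch (the addresses, gateway, and table number are placeholders) driving the iproute2 tools from Python:

```python
import subprocess

# Placeholder addressing: eth1 is 192.168.2.10/24 with gateway 192.168.2.1.
ETH1_ADDR = "192.168.2.10"
ETH1_NET = "192.168.2.0/24"
ETH1_GW = "192.168.2.1"
TABLE = "100"  # dedicated routing table for eth1-sourced traffic

def ip(*args: str) -> None:
    subprocess.run(["ip", *args], check=True)  # must run as root

# Routes for the secondary table, plus a rule selecting it by source address,
# so replies to traffic that arrived on eth1 leave via eth1.
ip("route", "add", ETH1_NET, "dev", "eth1", "src", ETH1_ADDR, "table", TABLE)
ip("route", "add", "default", "via", ETH1_GW, "dev", "eth1", "table", TABLE)
ip("rule", "add", "from", ETH1_ADDR, "table", TABLE)
```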

Thanks for stopping by. I’ll be posting more content soon, once I figure out the nuances of the WordPress interface. In the meantime, feel free to check out the MITRE Honeyclient Project, as it currently consumes most of my time.